Feed aggregator

Cumulative Update #12 for SQL Server 2014 SP2

The 12th cumulative update release for SQL Server 2014 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.
To learn more about the release or servicing model, please visit:

Categories: Latest Microsoft News

Windows Server 2016 Reverse DNS Registration Behavior

Greetings everyone! Tim Beasley (Platforms PFE) coming back at ya from the infamous Nixa, Missouri! It’s infamous since it’s the home of Jason Bourne (Bourne Identity movies).

Anyways, I wanted to reach out to you all and quickly discuss the behavior changes of Windows Server 2016 when it comes to reverse DNS records. Don’t worry, it’s a good thing! We’ve written the code to follow RFC standards. But if you’re not aware of them, you might run into some wacky results in your environment.

During some discussions with one of my DSE customers, they had a rather large app that ultimately broke when they introduced WS2016 domain controller/DNS servers to their environment. What they saw was some unexpected behavior as the app references hostnames via reverse DNS records (PTRs). Now you might be wondering why this became an issue…

Turns out the app they use expects reverse DNS records in ALL LOWERCASE FORMAT. Basically, their application vendor did something silly, like take data from a case-insensitive source and use it in a case-sensitive lookup.

Before you all possibly go into panic mode, most applications are written well; they don’t care about this and work just fine. It’s the apps that were written for this specific behavior (and quite frankly don’t follow RFC standards) that could experience problems. Speaking of RFC standards, you can read all about case-insensitivity requirements per RFC 4343 here.

Let me give you an example of what I’m talking about here. In the below screenshot, you will see “2016-PAMSVR” as a pointer (PTR) record. This was taken from my lab environment running WS2016 1607 with all the latest patches (at this time, April 2018 updates). Viewing the DNS records in the MMC shows both uppercase and lowercase characters. In contrast, prior to 2016 (so 2012 R2 and lower) the behavior was different: ALL PTRs registered show up in LOWERCASE only.

***Note: the client OS level doing the PTR registration does not matter. This behavior will be reflected no matter what version of Windows or other OS you use.***

Here’s another example from an nslookup perspective:

To reiterate, when dynamically registering a PTR record against a DNS Server running Windows Server 2012 R2 or older, the DNS Server will downcase the entry.

Test machine name: WiNdOwS-1709.Contoso.com

When registering it against a DNS Server running Windows Server 2016, we keep the machine name case.
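To make this concrete, here is a sketch of what the reverse lookup might return in each case (the IP address and the DNS server names here are hypothetical):

nslookup 10.1.1.50 dns2012r2.contoso.com

Name:    windows-1709.contoso.com

nslookup 10.1.1.50 dns2016.contoso.com

Name:    WiNdOwS-1709.Contoso.com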

Please keep this behavior in the back of your mind when you’re introducing WS2016 Domain Controllers / DNS servers to your environments for the first time. Chances are you won’t run into any problems whatsoever. But if the stars aligned improperly and this does turn out to be an issue for you, then here are some suggestions on how to remediate it:

  1. Involve your App Vendor(s) and have them update their code the correct way, following RFC standards.
  2. What #1 says.
  3. Again, do what #1 says.
  4. If the app vendor pushes back and you absolutely have no other choice…you could update all the hostnames in your environment via PowerShell to reflect lowercase. You’d then have to clear out all reverse records and have the devices re-register once their hostnames are down-cased (see the re-registration sketch below). An example of this can be found here. Just be careful doing this and make sure you test the PowerShell script first before deploying to a production environment!!!
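If you do end up at option 4, the re-registration step itself is straightforward; a minimal sketch (run on each client after its hostname has been down-cased and its stale PTR deleted):

ipconfig /registerdns

or the equivalent PowerShell cmdlet:

Register-DnsClient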

Thanks for reading!

Tim Beasley…out. (for now)

Categories: Latest Microsoft News

Installation Procedure for Sybase 16.3 Patch Level 3 Always-on + DR on Suse 12.3 – Recent Customer Proof of Concept

Latest Microsoft Data Platform News - Sun, 06/17/2018 - 23:33

In recent months we saw several customers with large investments into Hana technologies approach Microsoft for information about deploying large mission critical SAP applications on Azure with the Sybase ASE database.

SAP Hana customers are typically able to deploy Sybase ASE at little or no additional cost if they have licensed Hana Database.

Many of the customers that have contacted Microsoft are shutting datacenters or terminating UNIX platforms and moving ECC or BW systems in the size range of 25-45TB DB volume to Azure. An earlier blog describes some of the requirements and best practices for VLDB migrations to Azure: https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

Until recently there was no simple, documented, straightforward installation procedure for a typical two node High-Availability pair with synchronous replication and a third node with asynchronous replication. This is quite a common requirement for SAP customers.

This blog is designed to supplement the existing SAP provided documentation and to provide some hints and additional information. The SAP Sybase team is continuously updating and improving the Sybase documentation, so it is always recommended to start with the official documentation and then cross-reference it against this blog. This document is based on real deployments from Cognizant and DXC. The latest versions of Sybase & Suse were then installed in a lab test environment to provide screenshots.

High Level Overview of Installation Steps

The high-level installation process for a 3 tier SAP Distributed Installation is:

  1. Read required OSS Notes, Installation Guides, Download Installation Media and the SAP on Sybase Business Suite documentation
    1. For SUSE Linux Release 12 with SP3, release notes: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/
    2. SAP Software Downloads https://support.sap.com/en/my-support/software-downloads.html
    3. SWPM Download https://support.sap.com/sltoolset
    4. Sybase Release Matrix https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information
    5. Sybase Official Documentation https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US
  2. Provision Azure VMs with Suse for SAP Applications 12.3 with Accelerated Networking enabled
  3. Perform OS patching and preparation steps detailed below
  4. Run SWPM Distributed Install and install the ASCS Instance
  5. Export the /sapmnt NFS share
  6. Mount the /sapmnt NFS share on the Primary, Secondary and DR DB server
  7. Run SWPM Distributed Install and install the Primary DB Instance
  8. Run SWPM Distributed Install and install the Primary Application Server (Optional: add additional App servers)
  9. Perform Sybase Always-on preparation steps on Primary DB Instance
  10. Run setuphadr on Primary DB Instance
  11. Run SWPM Distributed Install and install the Secondary DB Instance
  12. Perform Sybase Always-on preparation steps on Secondary DB Instance
  13. Run setuphadr on Secondary DB Instance
  14. Run SWPM Distributed Install and install the DR DB Instance
  15. Perform Sybase Always-on preparation steps on DR DB Instance
  16. Run setuphadr on DR DB Instance
  17. Run post steps such as installing Fault Manager
Deployment Config
  1. Suse 12.3 with latest updates
  2. Sybase 16.03.03
  3. SWPM version 22 or 23. SAP Kernel 7.49 patch 500. NetWeaver ABAP 7.50
  4. Azure Ev3 VMs with Accelerated Networking and 4 vcpu
  5. Premium Storage – each DB server has 2 x P20 disks (or more as required). App server has only a boot disk
  6. Official Sybase Documentation (some steps do not work; supplement with this blog) https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US
  7. Sample Response Files are attached here: Sybase-Sample-Response-Files. It is recommended to download and review these files
  8. Sybase Always-on does not leverage OS level clustering technologies such as Pacemaker or Windows cluster, and the Azure ILB is not used. Instead the SAP work process is aware of the Primary and Secondary Sybase server. The DR node does not support automatic failover; failing over to the DR node and configuring SAP app servers to connect to it is a manual process
  9. This installation shows a “Distributed” installation. If the SAP Central Services should be highly available, follow the SAP on Azure documentation for Pacemaker
  10. Sybase Fault Manager is automatically installed on the SAP PAS during installation
  11. Be careful of Linux vs. Windows end-of-line (EOL) characters. Use the Linux command cat -v response_file.rs. If ^M characters are seen then the file contains Windows EOL characters.

    Example: cat -v test.sh

    Output:

    Line 1 ^M

    Line 2 ^M

    Line 3 ^M

    (Note: CTRL+M (^M) is a single character in Linux, which is the carriage return from Windows. This needs to be fixed before utilizing the file in Linux.)

        To fix the issue

            $> dos2unix test.sh

            Output

                Line 1

                Line 2

                Line 3
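    If dos2unix is not available, a sed one-liner removes the trailing carriage returns just as well (an alternative with the same effect):

            $> sed -i 's/\r$//' test.sh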

  12. Hosts file configuration used for this deployment

    Example: <IP Address> <FQDN> <SHORTNAME> <#Optional Comments>

    10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

    10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

    10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

    10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR

    Common Preparation Steps on all Suse Servers

sudo zypper install -y glibc-32bit

sudo zypper install -y libgcc_s1-32bit

#these two 32-bit glibc packages are mandatory, otherwise Always-on will not work

sudo zypper up -y

Note: It is mandatory to reboot the server if kernel patches are applied.
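A quick way to check whether a reboot is pending is to compare the running kernel against the newest installed one (a simple heuristic; a mismatch means a reboot is needed):

uname -r

rpm -q kernel-default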

#resize the boot disk. The default Linux root disk of 30GB is too small. Shut down the VM and edit the disks in the Azure Portal or via PowerShell, increasing the size of the disk to 60-100GB. Restart the VM and run the commands below. There is no benefit or advantage to provisioning an additional separate disk for a SAP application server

sudo fdisk /dev/sda

##delete the existing partition (this will not delete the data) and create [n] new primary [p] partition with defaults and write [w] config

sudo resize2fs /dev/sda2

sudo reboot

#Check Accelerated Networking is working

/sbin/ethtool -S eth0 | grep vf_
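#if Accelerated Networking is active, vf_ counters such as vf_rx_packets and vf_tx_packets are listed and increase with traffic; no vf_ output means the virtual function is not bound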

#Add these entries to the hosts file

sudo vi /etc/hosts

10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR

 #edit the waagent to create a swapfile

sudo vi /etc/waagent.conf

Lines to look for:

ResourceDisk.EnableSwap=n

ResourceDisk.SwapSizeMB=

Modify the above values. Note: the swap size must be given in MB only.

#enable the swapfile and set a size of 2GB or more. Example:

ResourceDisk.EnableSwap=y

ResourceDisk.SwapSizeMB=2000

Once done, a restart of the agent is necessary to get the swap file up and active.

sudo systemctl restart waagent
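To confirm the swap file is active after the restart, either of these standard commands can be used:

sudo swapon -s

free -m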

Other Services to be enabled and restarted are:

sudo systemctl restart nfs-server

sudo systemctl enable nfs-server

sudo systemctl status uuidd

sudo systemctl enable uuidd

sudo systemctl start uuidd

sudo systemctl status uuidd

##run sapcar and unpack SWPM 22 or 23

sapcar -xvf SWPM10SP22_7-20009701.SAR

SAP APP Server ASCS Install

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open a web browser from a Management Server and enter the Suse os-user name and password: https://10.1.0.10:4237/sapinst/docs/index.html

##after install, export the NFS share for /sapmnt

sudo vi /etc/exports

#add this line: /sapmnt *(rw,no_root_squash)

## open port 2049 for nfs on NSG if required [by default VMs on same vnet can talk to each other]

 sudo systemctl restart nfs-server
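To verify the share is exported, the standard showmount utility can be used (for example from one of the DB servers once they can reach sybapp1):

showmount -e sybapp1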

SAP DB Instance Install

##do common preparation steps such as zypper and hosts file etc

#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc  -> n, p, w

sudo fdisk /dev/sdd  -> n, p, w

#It is generally recommended to use LVM and create pv, lv etc here so we can test performance later with striping additional disks.

Note: if multiple disks are used to create the data / backup / log storage, enable striping to get optimal performance.

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries into /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 1 2

Option 2:

#now sudo su - to the root user and run this (replace GUID) – cannot run this with the sudo command, must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h

##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart
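Accessing the path triggers the automount; a quick sanity check that the share is reachable:

ls /sapmnt

df -h /sapmnt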

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open web browser and start installation

SAP PAS Install

##do same preparations as ASCS for zypper and hosts file etc

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

https://10.1.0.10:4237/sapinst/docs/index.html

AlwaysOn Install Primary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##Login as syb<sid> – in this case the <sid> = ase

sybdb1 /sybase% whoami

sybase

sybdb1 /sybase% pwd

/sybase

sybdb1 /sybase% ls

ASE  source  sybdb1_dma.rs  sybdb1_setup_hadr.rs

sybdb1 /sybase% cat sybdb1_dma.rs | grep USER_INSTALL_DIR

USER_INSTALL_DIR=/sybase/ASE

sybdb1 /sybase%

sybdb1 /sybase% source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb1_dma.rs -i silent

Note: if the command does not run, put several <space> characters before the -i silent. Use the full path to setup.bin from the ASE ZIP file and the full path to the response file, otherwise it will fail with a non-specific error message.

##run this command to unlock the sa account. Command will fail if “-X” is not specified

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

##If any errors occur review this note

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb1_setup_hadr.rs

AlwaysOn Install Secondary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit


#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk per volume group, but pv, lv etc. are created here so we can test performance later by striping additional disks

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

#now sudo su - to the root user and run this (replace GUID) – cannot run this with the sudo command, must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop the autofs service and unmount /sapmnt – sapinst will continue.

/sapmnt must be mounted again shortly afterwards.

##Login as syb<sid> – in this case the <sid> = ase

/sybase/source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb2_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb2_setup_hadr.rs

Do not restart the RMA – this is not required

AlwaysOn FM Install & Post Steps

The Sybase documentation for these steps is here.

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/286f4fc8b3ab4439b3400e97288152dc.html

The documentation is not complete. After doing the steps in the documentation link, review this Note:

1959660 – SYB: Database Fault Management

su - aseadm

rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:~ # su - aseadm

sybdb1:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb1:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:aseadm 3>

sybdb2:~ # su - aseadm

sybdb2:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb2:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb2:aseadm 3>

## Run AlwaysOn Tuning & Configuration script on Primary and Companion

isql -UDR_admin -PSAPHana12345 -Ssybdb1:4909

sap_tune_rs Site1, 16, 4

isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

sap_tune_rs Site2, 16, 4

sybdb2:aseadm 3> isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

1> sap_tune_rs Site2, 16, 4

2> go

TASKNAME                   TYPE                 VALUE
-------------------------  -------------------  ------------------------------------------------------------
Tune Replication Server    Start Time           Sun Apr 29 06:20:37 UTC 2018
Tune Replication Server    Elapsed Time         00:07:11
TuneRS                     Task Name            Tune Replication Server
TuneRS                     Task State           Completed
TuneRS                     Short Description    Tune Replication Server configurations.
TuneRS                     Long Description     Waiting 180 seconds: Waiting Replication Server to fully up.
TuneRS                     Task Start           Sun Apr 29 06:20:37 UTC 2018
TuneRS                     Task End             Sun Apr 29 06:27:48 UTC 2018
TuneRS                     Hostname             sybdb2

(9 rows affected)

## On the APP server only

sudo vi .dbenv.csh

setenv dbs_syb_ha 1

setenv dbs_syb_server sybdb1:sybdb2

## Restart the SAP App server

sapcontrol -nr 00 -function StopSystem ALL

sapcontrol -nr 00 -function StartSystem ALL

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/41b39cb667664dc09d2d9f4c87b299a7.html

sybapp1:aseadm 6> rsecssfx list

| Record Key                     | Status             | Time Stamp of Last Update |
|--------------------------------|--------------------|---------------------------|
| DB_CONNECT/DEFAULT_DB_PASSWORD | Encrypted          | 2018-04-29 03:07:11 UTC   |
| DB_CONNECT/DEFAULT_DB_USER     | Plaintext          | 2018-04-29 03:07:07 UTC   |
| DB_CONNECT/SYB/DR_PASSWORD     | Encrypted          | 2018-04-29 06:18:26 UTC   |
| DB_CONNECT/SYB/DR_USER         | Plaintext          | 2018-04-29 06:18:22 UTC   |
| DB_CONNECT/SYB/SADB_PASSWORD   | Encrypted          | 2018-04-29 03:07:19 UTC   |
| DB_CONNECT/SYB/SADB_USER       | Plaintext          | 2018-04-29 03:07:14 UTC   |
| DB_CONNECT/SYB/SAPSID_PASSWORD | Encrypted          | 2018-04-29 03:07:42 UTC   |
| DB_CONNECT/SYB/SAPSID_USER     | Plaintext          | 2018-04-29 03:07:37 UTC   |
| DB_CONNECT/SYB/SSODB_PASSWORD  | Encrypted          | 2018-04-29 03:07:27 UTC   |
| DB_CONNECT/SYB/SSODB_USER      | Plaintext          | 2018-04-29 03:07:22 UTC   |
| DB_CONNECT/SYB/SYBSID_PASSWORD | Encrypted          | 2018-04-29 03:07:34 UTC   |
| DB_CONNECT/SYB/SYBSID_USER     | Plaintext          | 2018-04-29 03:07:30 UTC   |
| SYSTEM_PKI/PIN                 | Encrypted          | 2018-04-27 22:36:39 UTC   |
| SYSTEM_PKI/PSE                 | Encrypted (binary) | 2018-04-27 22:36:45 UTC   |

Summary
-------
Active Records  : 14 (Encrypted: 8, Plain: 6, Wrong Key: 0, Error: 0)
Defunct Records : 12 (180+ days: 0; Show: "list -withHistory", Remove: "compact")

## Run the Fault Manager Installation steps on the SAP PAS application server

sybapp1:aseadm 24> pwd

/sapmnt/ASE/exe/uc/linuxx86_64

sybapp1:aseadm 25> whoami

aseadm

sybapp1:aseadm 26> ./sybdbfm install

replication manager agent user DR_admin and password set in Secure Store.

Keep existing values (yes/no)? (yes)

SAPHostAgent connect user: (sapadm)

Enter password for user sapadm.

Password:

Enter value for primary database host: (sybdb1)

Enter value for primary database name: (ASE)

Enter value for primary database port: (4901)

Enter value for primary site name: (Site1)

Enter value for primary database heart beat port: (13777)

Enter value for standby database host: (sybdb2)

Enter value for standby database name: (ASE)

Enter value for standby database port: (4901)

Enter value for standby site name : (Site2)

Enter value for standby database heart beat port: (13787)

Enter value for fault manager host: (sybapp1)

Enter value for heart beat to heart beat port: (13797)

Enter value for support for floating database ip: (no)

Enter value for use SAP ASE Cockpit if it is installed and running: (no)

installation finished successfully.

Restart the SAP Instance – FM is added to the ASCS start profile

sybapp1:aseadm 32> sybdbfm status

fault manager running, pid = 4338, fault manager overall status = OK, currently executing in mode PAUSING

*** sanity check report (5)***.

node 1: server sybdb1, site Site1.

db host status: OK.

db status OK hadr status PRIMARY.

node 2: server sybdb2, site Site2.

db host status: OK.

db status OK hadr status STANDBY.

replication status: SYNC_OK.

AlwaysOn Install 3rd Node (DR) Async

Official SAP Sybase documentation and Links:

https://blogs.sap.com/2018/04/19/high-availability-disaster-recovery-3-node-hadr-with-sap-ase-16.0-sp03/

Documentation https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US

https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US/6ca81e90696e4946a68e9257fa2d3c31.html

1. Install the DB host using SWPM in the same way as the companion host

2. Copy the companion host response file

3. Duplicate the section with all the COMP entries, add it at the bottom, and rename the section of the newly copied COMP entries to DR (for example). Leave the old COMP and PRIM entries as is.

4. Change the setup site to DR

5. All other entries from PRIM and COMP must remain the same, since the setuphadr run for the 3rd node needs to know about the previous 2 hosts.

6. Execute setuphadr

Review the Sample Response File attached to this blog

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit


#create disks for sybase

Note: when multiple disks are added for data/log/backup to create a single volume, use the right striping settings to get better performance

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

(for the log volume use -I 32)

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk per volume group, but pv, lv etc. are created here so we can test performance later by striping additional disks

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries into /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 1 2

Option 2:

#now sudo su - to the root user and run this (replace GUID) – cannot run this with the sudo command, must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


Note: when automount is enabled, mount points are visible in the df -h output only after the folders have been accessed.

##create a directory for the source files.

sudo mkdir -p /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop the autofs service and unmount /sapmnt – sapinst will continue.

/sapmnt must be mounted again shortly afterwards.

## Install the DMA on the DR Node

##Login as syb<sid> – in this case the <sid> = ase

source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb3_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

sybdb3 /sybase% uname -a

Linux sybdb3 4.4.120-92.70-default #1 SMP Wed Mar 14 15:59:43 UTC 2018 (52a83de) x86_64 x86_64 x86_64 GNU/Linux

sybdb3 /sybase% whoami

sybase

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

sybdb3 /sybase% setuphadr /sybase/sybdb3_setup_hadr.rs

AlwaysOn Testing & Useful Command Syntax

The section below covers planned and unplanned failovers as well as monitoring commands.

It is recommended to review the Sybase documentation and also to review these SAP Notes:

1982469 – SYB: Updating SAP ASE with saphostctrl

1959660 – SYB: Database Fault Management

2179305 – SYB: Usage of saphostctrl for SAP ASE and SAP Replication Server

## Check if Fault Manager is running on the SAP PAS with this command

ps -ef | grep sybdbfm

the executable is in /usr/sap/<SID>/ASCS00/work

sybdbfm is copied to sybdbfm.sap<SID>_ASCS00

cd /usr/sap/<SID>/ASCS00/work

./sybdbfm.sapASE_ASCS00 status

./sybdbfm.sapASE_ASCS00 hibernate

./sybdbfm.sapASE_ASCS00 resume

login as syb<sid>, in this case sybase

## Login to the RMA

isql -UDR_admin -P<<password>> -SASE_RMA_Site1 -I DM/interfaces -X -w999

## to see all the components that are running

sap_version all

go

## to see the status of a replication path

sap_status path

go

## to see the status of resources

sap_status resource

go

## Login to ASE

The syntax “-I DM/interfaces” does a lookup in the Sybase AlwaysOn configuration database to find the host and TCP port

isql -UDR_admin -P<<password>> -SASE_Site1 -I DM/interfaces -X -w999

## to clear down the transaction log run this command

dump tran ASE with truncate_only

go

## to show freespace in DB

sp_helpdb ASE

go

## Transaction log backups are needed on all replicas otherwise the Trans Log will become full
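## as a sketch only, a transaction log backup to a file from isql looks like this (the /sybbackup path is an assumption; adjust the path and scheduling to your backup strategy)

dump transaction ASE to "/sybbackup/ASE_tran.dmp"

go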

## to start/stop/get info on Sybase DB (and all required components for Always on like RMA) – run this on the DB host

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE -dbtype syb

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE_REP -dbtype syb

## to get Sybase DB status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function GetDatabaseStatus -dbname ASE -dbtype syb

## to get Sybase DB replication status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Check -updateoption TASK=REPLICATION_STATUS

## to send a trace ticket logon to RMA and execute these commands

sap_send_trace Site1

go

sap_status active

go

## during HADR testing leave tail running on the file /usr/sap/<SID>/ASCS00/work

tail -100f dev_sybdbfm

## to force a shutdown of the DB engine run the command below. Always-on will try to stop a normal shutdown of the DB

shutdown with wait nowait_hadr

go

## to do a planned failover from Primary to Companion DB the normal sequence is:

1. Failover from Primary to Companion

2. Drain logs from Primary to the DR site

3. Reverse Replication Route to start synchronization from the new Primary to the Companion and DR

— There is a new command that does all these steps automatically:

/usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Execute -updateoption TASK=FAILOVER -updateoption FAILOVER_FORCE=1 -updateoption FAILOVER_TIME=300

## it is recommended to use this command. If there are errors check in the path /usr/sap/hostctrl/work for log files

##other useful commands:

## to disable/enable replication from a Site to all routes

sap_disable_replication Site1, <DB>

sap_enable_replication Site1,Site2,<DB>

## command to manually failover

sap_failover <primary>,<standby>,<timeout>, [force], [unplanned]
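## for example, a planned failover from Site1 to Site2 with a 300 second timeout would look like this (site names from this deployment; verify the syntax against your RMA version)

sap_failover Site1,Site2,300

go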

## Materialize is a “dump and load” to reinitialize a Sybase Always-on replica.

sap_materialize auto,Site1,Site2,master

sap_materialize auto,Site1,Site2,<SID>

Sybase How To & Links

Customers familiar with SQL Server AlwaysOn should note that although it is possible to take a DB or log backup from a replica, these backups are not compatible between Primary <-> Replica databases. Unlike SQL Server, it is also a requirement to run transaction log backups on the replica nodes.

SAP Notes:

2134316 – Can SAP ASE run in a cloud environment? – SAP ASE

1554717 – SYB: Planning information for SAP on ASE

1706801 – SYB: SAP ASE released for virtual systems

1590719 – SYB: Updates for SAP Adaptive Server Enterprise (SAP ASE)

1959660 – SYB: Database Fault Management

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

2489781 – SAP ASE 16.0 SP03 Supported Operating Systems and Versions

DBA Cockpit doesn’t work by default after installation.

Set up DBA Cockpit as per:
2293673 – SYB: DBA Cockpit Correction Collection SAP Basis 7.50

1605680 – SYB: Troubleshoot the setup of the DBA Cockpit on Sybase ASE

1245200 – DBA: ICF Service Activation for WebDynpro DBA Cockpit

For SUSE Linux Release 12 with SP3, release notes: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/

SAP Software Downloads https://support.sap.com/en/my-support/software-downloads.html

SWPM Download https://support.sap.com/sltoolset

Sybase Release Matrix https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information

Sybase Official Documentation https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US

Special thanks to Wajeeh Samdani from SAP Sybase Development in Walldorf

Special thanks to Cognizant SAP Cloud Team for their input and review of this blog

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

Categories: Latest Microsoft News

SAP on Azure: General Update – June 2018

Latest Microsoft Data Platform News - Sun, 06/17/2018 - 23:31

SAP and Microsoft are continuously adding new features and functionalities to the Azure platform. The key objective of the Azure cloud platform is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. M-Series is Certified for SAP Hana – S4, BW4, BWoH and SoH up to 3.8TB RAM

SAP Hana customers can run S4HANA, BW4Hana, BW on Hana and Business Suite on Hana in Production in many of the Azure datacenters in Americas, Europe and Asia. https://azure.microsoft.com/en-us/global-infrastructure/regions/ More information in this blog: https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Requirements: Write Accelerator must be used for the Transaction Log disk only. Suse 12.3 or RHEL 7.3 or higher.

The SAP IaaS catalogue now includes M-series and Hana Large Instances

More information on the Write Accelerator can be found here:

https://azure.microsoft.com/en-us/blog/write-accelerator-for-m-series-virtual-machines-now-generally-available/

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/how-to-enable-write-accelerator

The central Note for SAP Hana on Azure VMs is Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types https://launchpad.support.sap.com/#/notes/0001928533

The Note for Hana Large Instances for memory up to 20TB scale up is Note 2316233 – SAP HANA on Microsoft Azure (Large Instances) https://launchpad.support.sap.com/#/notes/2316233

Summary of M-Series VMs for SAP NetWeaver and SAP Hana

M-Series running SAP Hana

1. Transaction Log disk(s) must have Azure Write Accelerator Enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

2. Azure Write Accelerator must not be activated on Data disks

3. Azure Accelerated Networking should always be enabled on M-Series VMs running Hana

4. The precise OS releases that are supported for Hana can be found in the SAP Hana IaaS Directory https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

5. Where there is any discrepancy between any SAP OSS Note such as 1928533 or other source the SAP Hana IaaS Directory takes precedence

M-Series running AnyDB (SQL, Oracle, Sybase etc)

1. Windows 2012 and 2016, Suse 12.3, RHEL 7.x and Oracle Linux are all supported

2. Transaction Log disk(s) should have Azure Write Accelerator Enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

3. Azure Write Accelerator must not be activated on Data disks

4. Azure Accelerated Networking should always be enabled on M-Series VMs running AnyDB

5. If running Oracle Linux, as at June 2018 the RHEL-compatible kernel must be used instead of the Oracle UEK4 kernel. Oracle UEK5 will support Accelerated Networking with Oracle Linux 7.5

Additional Small Certified M-Series VMs

Small M-Series VMs are certified:

1. M64ls with 64vCPU and 512GB

2. M32ls with 32vCPU and 256GB

3. M32ts with 32vCPU and 192GB

Disk configuration and additional information on these new smaller M-Series VMs can be found here https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

2. SAP NetWeaver on Windows Hyper-V 2016 Fully Supported

Windows Server 2016 Hyper-V is now fully supported as a Hypervisor for SAP applications running on Windows.

Hyper-V 2016 is a powerful component for customers wishing to deploy a hybrid cloud environment with some components on-premises and some components on Azure.

A special fix for Windows 2016 is required before using Hyper-V 2016. Apply the latest update for “Windows 10/Windows Server 2016 1607” but at least this patch level https://support.microsoft.com/en-us/help/4093120/windows-10-update-kb4093120

https://wiki.scn.sap.com/wiki/display/VIRTUALIZATION/SAP+on+Microsoft+Hyper-V

1409604 – Virtualization on Windows: Enhanced monitoring https://launchpad.support.sap.com/#/notes/0001409604

1409608 – Virtualization on Windows https://launchpad.support.sap.com/#/notes/0001409608

More information on Windows 2016 for SAP is here:

https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/07/windows-2016-is-now-generally-available-for-sap/

https://blogs.sap.com/2017/05/06/performance-tuning-guidelines-for-windows-server-2016-hyper-v/

3. Build Disaster Recovery Capability within Azure Regions with Availability Zones

Availability Zones are Generally Available in many Azure Regions and are being deployed to most Azure regions shortly.

Availability Zones are physically separated datacenters with independent network and power infrastructure

VMs running in an Availability Zone achieve an SLA of 99.99% https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_8/

A very good overview of Availability Zones on Azure and the interaction with other Azure components is detailed in this blog

https://blogs.msdn.microsoft.com/igorpag/2018/05/03/azure-availability-zones-quick-tour-and-guide/

More information below

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-availability-zones

https://blogs.msdn.microsoft.com/igorpag/2017/10/08/why-azure-availability-zones/

https://azure.microsoft.com/en-us/global-infrastructure/availability-zones/

A typical topology is depicted below.


The Azure Standard Internal Load Balancer is used for workloads that are distributed across Availability Zones. Even when deploying VMs into an Azure region that does not yet have Availability Zones, it is recommended to use the Standard Internal Load Balancer in Zone-redundant mode. This allows the deployment to take advantage of Availability Zones later without reworking the load balancer configuration.

To view the VM types that are available in each Availability Zone in a Datacenter run this PowerShell command

Get-AzureRmComputeResourceSku | where {$_.Locations.Contains("southeastasia") -and $_.ResourceType.Equals("virtualMachines") -and $_.LocationInfo[0].Zones -ne $null }

Similar information can be seen in the Azure Portal when creating a VM

Customers building High Availability solutions with Suse 12.x operating system can review documentation on how to deploy single SID and Multi SID Suse Pacemaker clusters

The Microsoft documentation discusses the “Microsoft SAP Fencing Agent + single iSCSI” STONITH configuration scenario.

An alternative deployment scenario is “Two iSCSI devices in different Availability Zones”.

A Suse bug fix may be required to configure two iSCSI devices:
https://www.suse.com/support/kb/doc/?id=7022477

https://ptf.suse.com/f2cf38b50ed714a8409693060195b235/sles12-sp3-hae/14410/x86_64/20171219  (a user id is needed)

A recommended deployment configuration is to place each iSCSI source in a different Availability Zone.

4. Sybase ASE 16.3 PL3 “Always-on” on Azure – 2 Node HA + 3rd Async Node for DR

A new blog with step-by-step instructions on how to install and configure a 2 node HA Sybase cluster with a third node for DR has been released.

https://blogs.msdn.microsoft.com/saponsqlserver/2018/05/18/installation-procedure-for-sybase-16-3-patch-level-3-always-on-dr-on-suse-12-3-recent-customer-proof-of-concept

5. Very Useful Links for SAP on Azure Consultants

The listing below is a comprehensive collection of links that has proved very useful for many consultants working at System Integrators.

SAP on Azure Reference Architectures

SAP S/4HANA for Linux Virtual Machines on Azure https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-s4hana

Run SAP HANA on Azure Large Instances https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/hana-large-instances

Deploy SAP NetWeaver (Windows) for AnyDB on Azure Virtual Machines https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-netweaver

High Availability SAP Netweaver Any DB

High-availability architecture and scenarios for SAP NetWeaver

Azure Virtual Machines high availability architecture and scenarios for SAP NetWeaver https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse

Azure infrastructure preparation for SAP NetWeaver high-availability deployment

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-file-share

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#setting-up-a-highly-available-nfs-server

Installation of an SAP NetWeaver high availability system in Azure

Install SAP NetWeaver high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-shared-disk

Install SAP NetWeaver high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-file-share

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#prepare-for-sap-netweaver-installation

High Availability SAP Hana

HANA Large Instance

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

High availability set up in SUSE using the STONITH https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/ha-setup-with-stonith

SAP HANA high availability for Azure virtual machines https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-overview

SAP HANA availability within one Azure region https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-one-region

SAP HANA availability across Azure regions https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-across-regions

Disaster Recovery

Protect a multi-tier SAP NetWeaver application deployment by using Site Recovery https://docs.microsoft.com/en-gb/azure/site-recovery/site-recovery-sap

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

Setting Up Hana System Replication on Azure Hana Large Instances https://blogs.msdn.microsoft.com/saponsqlserver/2018/02/10/setting-up-hana-system-replication-on-azure-hana-large-instances/

Monitoring

New Azure PowerShell cmdlets for Azure Enhanced Monitoring https://blogs.msdn.microsoft.com/saponsqlserver/2016/05/16/new-azure-powershell-cmdlets-for-azure-enhanced-monitoring/

The Azure Monitoring Extension for SAP on Windows – Possible Error Codes and Their Solutions https://blogs.msdn.microsoft.com/saponsqlserver/2016/01/29/the-azure-monitoring-extension-for-sap-on-windows-possible-error-codes-and-their-solutions/

Azure Extended monitoring for SAP https://blogs.msdn.microsoft.com/saponsqlserver/2014/06/24/azure-extended-monitoring-for-sap/

https://docs.microsoft.com/en-us/azure/operations-management-suite/

https://azure.microsoft.com/en-us/services/monitor/

https://azure.microsoft.com/en-us/services/network-watcher/

Automation

https://azure.microsoft.com/en-us/services/automation/

Automate the deployment of SAP HANA on Azure https://github.com/AzureCAT-GSI/SAP-HANA-ARM

Migration from on-premises DC to Azure

Transfer data with the AzCopy https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy

Azure Import/Export service https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

Very Large Database Migration to Azure https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

SAP on Azure – DMO with System Move https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/

SAP on Azure certification

SAP Certified IaaS Platforms https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

SAP Note #1928533 – SAP Applications on Azure: Supported Products and Azure VM types  https://launchpad.support.sap.com/#/notes/1928533

SAP certifications and configurations running on Microsoft Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-certifications

Azure M-series VMs are now SAP HANA certified https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Backup Solutions

Azure VM backup for OS https://azure.microsoft.com/en-gb/services/backup/

HANA VM Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-guide

HANA VM backup to file https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level

HANA VM backup based on storage snapshots https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots

HANA Large Instance (HLI) Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#backup-and-restore

HLI backup based on storage snapshots https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#using-storage-snapshots-of-sap-hana-on-azure-large-instances

Use third party backup tools: Commvault, Veritas, etc.

All the major third-party backup tools are supported in Azure and have agents for SAP HANA, SQL, Oracle, Sybase etc

Commvault

Azure: https://documentation.commvault.com/commvault/v11/article?p=31252.htm

SAP HANA: https://documentation.commvault.com/commvault/v11/article?p=22305.htm

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/commvault.commvault?tab=Overview

Veritas NetBackup

Azure: https://www.veritas.com/support/en_US/article.100041400

HANA: https://www.veritas.com/content/support/en_US/doc/16226696-127422304-0/v88504823-127422304

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview

Security

Network

Logically segment subnets https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#logically-segment-subnets

Control routing behavior https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#control-routing-behavior

Enable Forced Tunneling https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-forced-tunneling

Use virtual network appliances https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-virtual-network-appliances

Deploy DMZs for security zoning https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#deploy-dmzs-for-security-zoning

Avoid exposure to the Internet with dedicated WAN links https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#avoid-exposure-to-the-internet-with-dedicated-wan-links

Optimize uptime and performance https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#optimize-uptime-and-performance

HTTP-based Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#http-based-load-balancing

External Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#external-load-balancing

Internal Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#internal-load-balancing

Use global load balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-global-load-balancing

Disable RDP/SSH Access to Azure Virtual Machines https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#disable-rdpssh-access-to-azure-virtual-machines

Enable Azure Security Center https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-azure-security-center

Securely extend your datacenter into Azure https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#securely-extend-your-datacenter-into-azure

Operational

Monitor, manage, and protect cloud infrastructure https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitor-manage-and-protect-cloud-infrastructure

Manage identity and implement single sign-on https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#manage-identity-and-implement-single-sign-on

Trace requests, analyze usage trends, and diagnose issues https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#trace-requests-analyze-usage-trends-and-diagnose-issues

Monitoring services https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitoring-services

Prevent, detect, and respond to threats https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#prevent-detect-and-respond-to-threats

End-to-end scenario-based network monitoring https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#end-to-end-scenario-based-network-monitoring

Azure Security Center https://azure.microsoft.com/en-us/blog/protect-virtual-machines-across-different-subscriptions-with-azure-security-center/

https://azure.microsoft.com/en-us/blog/how-azure-security-center-helps-detect-attacks-against-your-linux-machines/

New VM Type for single tenant isolated VM https://azure.microsoft.com/en-us/blog/new-isolated-vm-sizes-now-available/

Azure Active Directory

Azure Active Directory integration with SAP HANA https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-saphana-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud Platform Identity Authentication https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Business ByDesign https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sapbusinessbydesign-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud for Customer for SSO functionality https://blogs.sap.com/2017/08/02/azure-active-directory-integration-with-sap-cloud-for-customer-for-sso-functionality/

S/4HANA environment – Fiori Launchpad SAML Single Sign-On with Azure AD https://blogs.sap.com/2017/02/20/your-s4hana-environment-part-7-fiori-launchpad-saml-single-sing-on-with-azure-ad/

Very good rollup article on Azure Networking https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/

Special thanks to Ravi Alwani for collating these links

6. New Microsoft Features for SAP Customers

Microsoft has released many new features for SAP customers:

Azure Site Recovery Azure-2-Azure – Support for Suse 12.x has been released! https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix

Global vNet Peering – previously it was not possible to peer vNets across regions. This is now Generally Available in some datacenters and is being deployed globally. One of the biggest advantages of Global vNet Peering is that network traffic is carried across the Azure network backbone.

https://blogs.msdn.microsoft.com/wushuai/2018/02/04/provide-cross-region-low-latency-service-based-on-azure-vnet-peering/

https://azure.microsoft.com/en-us/blog/global-vnet-peering-now-generally-available/

The new Standard Internal Load Balancer (ILB) is Availability Zone aware and has better performance than the regular ILB

https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#load-balancer

https://github.com/yinghli/azure-vm-network-performance (scroll down to review performance)

SQL Server 2016 Service Pack 2 (SP2) released https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2016-service-pack-2-sp2-released/

Linux customers are recommended to set up the Azure Serial Console. This allows access to a Linux VM when the network stack is not working. This feature is the equivalent of an RS-232/COM port cable connection https://docs.microsoft.com/en-us/azure/virtual-machines/linux/serial-console

Azure Storage Explorer provides easier management of blob objects such as backups on Azure blob storage https://azure.microsoft.com/en-us/features/storage-explorer/

Azure now offers a Trusted Execution Environment leveraging Intel Xeon processors with Intel SGX technology. This is not yet tested with SAP, but may be validated in the future https://azure.microsoft.com/en-us/blog/azure-confidential-computing/

More information on new Network features can be found here https://azure.microsoft.com/en-us/blog/azure-networking-may-2018-announcements/

https://azure.microsoft.com/en-us/blog/monitor-microsoft-peering-in-expressroute-with-network-performance-monitor-public-preview/

7. New SAP Features

SAP has released a new version of SWPM. It is recommended to use this version for all new Installations. The tool can be downloaded from https://support.sap.com/en/tools/software-logistics-tools.html

1680045 – Release Note for Software Provisioning Manager 1.0 (recommended: SWPM 1.0 SP 23) https://launchpad.support.sap.com/#/notes/0001680045

Customers interested in automating SWPM can review 2230669 – System Provisioning Using a Parameter Input File https://launchpad.support.sap.com/#/notes/2230669

SAP has released new SAP Downwards Compatible Kernels and has provided guidance to switch to the new 7.53 kernel for all new installations:

SAP recommends using the latest SP stack kernel (SAPEXE.SAR and SAPEXEDB.SAR), available in the Support Packages & Patches section of the SAP Support Portal https://launchpad.support.sap.com/#/softwarecenter.

For existing installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 749 PL 500. For details, see release note 2626990.
For new installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 753 PL 100. For details, see DCK note 2556153 and release note 2608318.
For AS ABAP 7.52 this is SAP Kernel 753 PL 100. For details, see release note 2608318.

2083594 – SAP Kernel 740, 741, 742, 745, 749 and 753: Versions and Kernel Patch Levels https://launchpad.support.sap.com/#/notes/2083594

2556153 – Using kernel 7.53 instead of kernel 7.40, 7.41, 7.42, 7.45, or 7.49 https://launchpad.support.sap.com/#/notes/0002556153

2350788 – Using kernel 7.49 instead of kernel 7.40, 7.41, 7.42 or 7.45 https://launchpad.support.sap.com/#/notes/0002350788

1969546 – Release Roadmap for Kernel 74x and 75x https://launchpad.support.sap.com/#/notes/1969546

https://wiki.scn.sap.com/wiki/display/SI/SAP+Kernel:+Important+News

8. Recommended Hana on Azure Disk Design Template

The Excel spreadsheet here contains a useful model template for customers planning to deploy Hana on Azure VMs.

The spreadsheet contains a sample Hana deployment on Azure M-series with details such as stripe sizes, Write Accelerator and other useful configuration settings.

Further information and details can be found in the Azure documentation here: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

The spreadsheet is a sample only and should be adapted as required; for example, the Availability Zone column will likely need to be updated.

Special thanks to Archana from Cognizant for creating this template

9. Disable 8.3 Filename Generation

Very old Windows releases were limited to filenames in 8.3 format (an eight-character name plus a three-character extension). Some applications that call very old file-handling APIs can only reference files by their 8.3-format filenames.

Up to and including Windows Server 2016, every file whose name does not fit the 8.3 format additionally has an 8.3-format short name generated for it by default.

Generating these short names becomes very expensive when a directory contains a very large number of files. SAP customers frequently keep job logs or interface files on the /sapmnt share, where the total number of files can reach hundreds of thousands.

It is recommended to disable 8.3 filename generation on all existing Windows servers and to include disabling 8.3 filename generation as part of the standard build of new Windows servers alongside steps such as removing Internet Explorer and disabling SMB v1.0.

662452 – Poor file system performance/errors during data accesses https://launchpad.support.sap.com/#/notes/662452

https://support.microsoft.com/en-us/help/121007/how-to-disable-8-3-file-name-creation-on-ntfs-partitions
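
For reference, here is a minimal sketch of disabling 8.3 name generation with the built-in fsutil tool, run from an elevated prompt. The D:\sapmnt path is purely illustrative; as recommended above, test on a non-production system first.

# Query the current 8.3 name-creation state
# (0 = enabled, 1 = disabled on all volumes, 2 = per volume, 3 = disabled except system volume)
fsutil 8dot3name query

# Disable 8.3 short-name generation system-wide (affects newly created files only)
fsutil behavior set disable8dot3 1

# Optionally strip short names already generated under a directory tree
# (/s = recurse, /v = verbose); illustrative path only
fsutil 8dot3name strip /s /v D:\sapmnt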

10. Hundreds of vCPU and 12TB Azure VMs Certified for SAP Hana Announced

In a blog by Corey Sanders, Microsoft confirms a new generation of M-series VMs supporting hundreds of vCPUs and 12 TB of RAM. The blog “Why you should bet on Azure for your infrastructure needs, today and in the future” announces the following:

1. Next Generation M-Series based on Intel Skylake CPU supporting up to 12TB of RAM

2. New Hana Large Instance TDIv5 appliances 6 TB, 12 TB and 18 TB

3. New Standard SSD based storage – suitable for backups and bulk storage https://azure.microsoft.com/en-us/blog/preview-standard-ssd-disks-for-azure-virtual-machine-workloads/

4. New smaller M-Series VMs suitable for non-production, Solution Manager and other smaller SAP Hana databases

https://azure.microsoft.com/en-us/blog/why-you-should-bet-on-azure-for-your-infrastructure-needs-today-and-in-the-future/

https://azure.microsoft.com/en-us/blog/offering-the-largest-scale-and-broadest-choice-for-sap-hana-in-the-cloud/

Miscellaneous Topics, Notes & Links

2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa) https://launchpad.support.sap.com/#/notes/0002343511

Optimizing SAP for Azure https://www.microsoft.com/en-us/download/details.aspx?id=56819

Useful link on setting up LVM on Linux VMs https://docs.microsoft.com/en-us/azure/virtual-machines/linux/configure-lvm

Updated SQL Column Store documentation – recommended for all BW on SQL customers 2116639 – SQL Server Columnstore Documentation https://launchpad.support.sap.com/#/notes/0002116639

Categories: Latest Microsoft News

Update for Configuration Manager current branch, version 1802 is now available

Latest Microsoft Server Management News - Fri, 06/15/2018 - 15:05

An update rollup for System Center Configuration Manager current branch, version 1802 is now available. This update is available for installation in the Updates and Servicing node of the Configuration Manager console. Please note that if the Service Connection Point is in offline mode, you must re-import the update so that it is listed in the Configuration Manager console. Refer to the Install in-console Updates for System Center Configuration Manager topic for details.

For complete details regarding the update rollup for ConfigMgr current branch v1802, including the list of issues that are fixed, please see the following:

4163547 - Update rollup for System Center Configuration Manager current branch, version 1802 (https://support.microsoft.com/help/4163547)

Categories: Latest Microsoft News

Because it’s Friday: Olive Garden Bot

Latest Microsoft Data Platform News - Fri, 06/15/2018 - 13:18

Comedy writer Keaton Patti claims this commercial script for a US Italian restaurant chain was generated by a bot:

I forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own. Here is the first page. pic.twitter.com/CKiDQTmLeH

— Keaton Patti (@KeatonPatti) June 13, 2018

Of course this wasn't bot-generated, but "what a bot might write" is fertile ground for comedy:

I forced a bot to read over 1,000 tweets claiming to be scripts written by bots online, then asked it to write a script itself. It wrote these two pages, then hung itself. pic.twitter.com/KbFXStJuAb

— Christine Love (@christinelove) June 1, 2018

That's all from us here at the blog for this week. Have a great weekend, and we'll be back next week.

Categories: Latest Microsoft News

Interpreting machine learning models with the lime package for R

Latest Microsoft Data Platform News - Fri, 06/15/2018 - 12:53

Many types of machine learning classifiers, not least commonly-used techniques like ensemble models and neural networks, are notoriously difficult to interpret. If the model produces a surprising label for any given case, it's difficult to answer the question, "why that label, and not one of the others?".

One approach to this dilemma is the technique known as LIME (Local Interpretable Model-Agnostic Explanations). The basic idea is that while for highly non-linear models it's impossible to give a simple explanation of the relationship between any one variable and the predicted classes at a global level, it might be possible to assess which variables are most influential on the classification at a local level, in the neighborhood of a particular data point. A procedure for doing so is described in this 2016 paper by Ribeiro et al, and implemented in the R package lime by Thomas Lin Pedersen and Michael Benesty (itself a port of the Python package of the same name).

You can read about how the lime package works in the introductory vignette Understanding Lime, but this limerick by Mara Averick also sums things up nicely:

There once was a package called lime,
Whose models were simply sublime,
It gave explanations for their variations,
One observation at a time.

"One observation at a time" is the key there: given a prediction (or a collection of predictions) it will determine the variables that most support (or contradict) the predicted classification.

The lime package also works with text data: for example, you may have a model that classifies a paragraph of text as having "negative", "neutral" or "positive" sentiment. In that case, lime will determine the words in that paragraph that are most important to determining (or contradicting) the classification. The package also helpfully provides a Shiny app that makes it easy to test out different sentences and see the local effect of the model.

To learn more about the lime algorithm and how to use the associated R package, a great place to get started is the tutorial Visualizing ML Models with LIME from the University of Cincinnati Business Analytics R Programming Guide. The lime package is available on CRAN now, and you can always find the latest version at the GitHub repository linked below.

GitHub (thomasp): lime (Local Interpretable Model-Agnostic Explanations)

Categories: Latest Microsoft News

Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle

Latest Oracle Press Releases - Fri, 06/15/2018 - 05:00
Press Release

Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle

Invest Banca S.p.A. is the first Italian bank to implement Oracle FLEXCUBE, localized and integrated by banking outsourcer Cabel Industry S.p.A.

Redwood Shores, Calif.—Jun 15, 2018

Cabel, an IT service provider for the financial services market in Italy since 1985, and Oracle Financial Services Global Business Unit, announced the availability today of Oracle FLEXCUBE for the Italian market. Oracle FLEXCUBE is a core banking solution that has been adopted by more than 600 financial institutions around the world.

Cabel and Oracle have collaborated since 2016 to localize the Oracle FLEXCUBE solution to improve the process of marketing new products and services to the Italian market, where client requirements are evolving rapidly. In recent months, the Oracle FLEXCUBE solution has been adapted to the regulations governing the Italian banking market and is now fully able to support the typical activity of the Italian banking system.

“The attitude of the banking system towards innovation is changing and at the same time there is a growing interest in the world of fintech. Invest Banca, thanks to the Oracle FLEXCUBE solution, has taken a decisive step forward. We went live with this Open Banking Platform May 7th and Oracle FLEXCUBE now allows us to easily and efficiently integrate with a series of specialized solutions already in use by our retail and institutional clients, but moreover, it allows us to keep apace with ever more demanding banking regulations, such as MiFID, PSD2 and GDPR. It also facilitates our interaction and experimentation with the latest technology advances such as Robo-Advisor, artificial intelligence, data science, social trading and blockchain,” said Stefano Sardelli, Managing Director of Invest Banca.

Cabel implemented the Italian version of Oracle FLEXCUBE, making it possible to integrate it into an already live and running banking system covering other banking operations. The solution can be outsourced or used on premises.

"This is a radically innovative solution, because it is a technology that facilitates the creation of lean products and services that are independent and based on completely different and more modern logic than traditional core banking systems in Italy,” said Francesco Bosio, President of Cabel Holding S.p.A.

“Oracle’s strategy is to work with leading local partners, who bring local domain skills to our best-in-class global solutions," said Chet Kamat, Senior Vice President, Banking, Oracle Financial Services. "Cabel is an innovation-oriented company and we chose to work with Cabel knowing it could fully utilize our modern, flexible technology to respond to the changes imposed by the digital age. As a result, Italian banks will see a significant improvement in their own productivity and market offerings—and their customers will get the benefit of excellent customer experience.”

Contact Info

Stefano Cassola
Oracle
39 022 495 9032
stefano.cassola@oracle.com

Sara D’Agati
Cabel Industry S.p.A.
39 339 8610096
sara.dagati@hfilms.net

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Categories: Latest Oracle News

PowerShell Script Analyzer 1.17.1 Released!

Latest Microsoft Server Management News - Thu, 06/14/2018 - 13:55

Summary: A new version of PSScriptAnalyzer is now available with many new features, rules, fixes and improvements.

You might remember me from my previous cross-platform remoting blog post, but just to introduce myself: I am Christoph Bergmeister, a full stack .Net developer in the London area. Since the start of this year I am also an official PSScriptAnalyzer maintainer, although I do not work at Microsoft.
On GitHub, you can find me as @bergmeister.

After half a year, a new version of PSScriptAnalyzer (also known as PSSA) has been published and is now available on the PSGallery.
Some of you might have been wondering what has happened.
First, the former maintainer switched projects, so it took some time to find and arrange a handover.
PSScriptAnalyzer is now maintained mainly by @JamesWTruher on the Microsoft side and by myself as a community maintainer.
After having already contributed to the PowerShell Core project, I started development on PSScriptAnalyzer last autumn and have since added a lot of new features.

New Parameters

Invoke-ScriptAnalyzer now has 3 new switch parameters:

  • -Fix (only on the -Path parameter set)
  • -ReportSummary
  • -EnableExit

The -Fix switch was the first and probably most challenging feature that I added.
Similar to how one can already get fixes for a subset of warnings (e.g. for AvoidUsingCmdletAliases) in VSCode, this feature lets you auto-fix the analysed files, which can be useful for tidying up a big code base.
When using this switch, one must still inspect the result and possibly make adaptations.
The AvoidUsingConvertToSecureStringWithPlainText rule, for example, will change a String to a SecureString, which means you must still create or obtain the SecureString in the first place.
A small warning about encoding: due to the way the engine works, it was not always possible to preserve the file encoding, so before checking in the changes it is recommended to verify that the encoding has not changed, in case your scripts are sensitive to that.

The -ReportSummary switch was first implemented by the community member @StingyJack, thanks for that.
The idea is to print a summary similar to Pester's, but since it writes to the host, we decided not to enable it by default and to start with an opt-in switch instead.
It was later refined to use the same colouring for warnings/errors as currently configured in the PowerShell host.

The -EnableExit switch was proposed by the community member @BatmanAMA, with the idea of a simpler, faster-to-write CI integration.
The switch makes Invoke-ScriptAnalyzer return an exit code equal to the number of rule violations, signalling success or failure to the CI system.
Of course, it is still best practice to have a Pester test (for each file and/or rule) as well, due to Pester's ability to produce result files that CI systems can interpret for more detailed analysis.
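
Putting the new switches together, typical usage looks roughly like this (the .\MyModule path is just a placeholder):

# Auto-fix what can be fixed and print a summary of the findings
Invoke-ScriptAnalyzer -Path .\MyModule -Recurse -Fix -ReportSummary

# In CI: exit with the number of rule violations, so any finding fails the build
Invoke-ScriptAnalyzer -Path .\MyModule -Recurse -EnableExit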

New Rules

AvoidAssignmentToAutomaticVariable

PowerShell has built-in variables, also known as automatic variables.
Some of them are read-only, and PowerShell throws an error at runtime if you try to assign to them.
Therefore, the rule warns against assignments to those variables.
Some of them, such as $error, are very easy to assign to by mistake, especially for new users who are not aware of them.
More automatic variables will be added to the 'naughty' list in the future, but since some automatic variables can be assigned to by design, the process of determining which ones to warn against is still in progress and subject to future improvement.
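
A minimal example of what the rule flags ($error is one of the read-only automatic variables mentioned above):

# Flagged by AvoidAssignmentToAutomaticVariable; this also throws at runtime
# because $error is read-only
$error = 'something went wrong'

# Use a variable name of your own instead
$errorMessage = 'something went wrong'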

PossibleIncorrectUsageOfRedirectionOperator and PossibleIncorrectUsageOfAssignmentOperator

I wrote those rules mainly for myself: as a C# programmer I switch between different languages quite often, and it happened to me and my colleagues quite often that we forgot simple syntax and wrote e.g. if ($a > $b) when what we actually meant was if ($a -gt $b). The assignment operator = is similarly ambiguous and can easily be used by accident instead of the intended equality operator -eq.
Since this only applies to if/elseif/while/do-while statements, I could limit the search scope.
A lot of intelligent logic went into avoiding false positives.
For example, the rule is clever enough to know that if ($a = Get-Something) is assignment by design, as this is a common coding pattern, and it is therefore excluded from the rule.
I received some interesting feedback from the community, and because PSSA does not currently support suppression on a per-line basis, the rule offers implicit suppression in Clang style, whereby wrapping the expression in extra parentheses tells the rule that the assignment is by design.
Thanks to @imfrancisd, who contributed this idea from the community.
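
To illustrate the cases the rule distinguishes (Get-Something stands in for any command):

# Flagged: assignment inside a conditional, where a comparison was probably intended
if ($a = $b) { 'runs whenever $b is truthy' }

# What was most likely meant
if ($a -eq $b) { 'runs when the values are equal' }

# Not flagged: assigning a command result in the condition is a common pattern by design
if ($a = Get-Something) { 'runs when the command returned a value' }

# Clang-style implicit suppression: extra parentheses mark the assignment as intentional
if (($a = $b)) { 'assignment explicitly intended' }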

AvoidTrailingWhiteSpace

This rule was implemented by the well-known community member @dlwyatt and really does what it says on the tin.
The idea behind it is especially to prevent problems that can be caused by whitespace after a backtick.
Personally, I have the following setting in my VSCode settings.json file, which trims trailing whitespace automatically upon saving a file.

"": { "files.trimTrailingWhitespace": true }, AvoidUsingCmdletAliases

This rule is not new but a new feature has been added:
If one types a bare command name such as 'service' and PowerShell cannot find it, it will try to add 'Get-' to the beginning of it and search again.
This feature has been present since PowerShell v1, by the way.
However, although 'service' might work on Windows, on Linux 'service' is a native binary that PowerShell would call instead.
Therefore it is not only the implicit aliasing that makes it dangerous to omit 'Get-', but also the ambiguity across operating systems, which can cause undesired behavior.
The rule is intelligent enough to check whether the native binary is present on the given OS, and therefore warns when using 'service' on Windows only.
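
For example (assuming a Linux machine where a native 'service' binary exists on the PATH):

# On Windows this resolves to Get-Service via implicit 'Get-' prefixing,
# but on Linux it would invoke the native 'service' binary instead
service

# Unambiguous on every operating system
Get-Service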

Miscellaneous engine improvements and fixes

A lot of fixes for thrown exceptions, false positives, false negatives, etc. are part of this release as well.
Some are notable:

  • The PowerShell extension of VSCode uses PowerShellEditorServices, which in turn calls into PSScriptAnalyzer for displaying the warnings using squiggles and also uses its formatting capabilities (shortcut: Ctrl+K+F on the selection).
    There was one bug whereby if a comment was at the end of e.g. an if statement and the statement got changed to have the brace on the same line, the formatter placed the comment before the brace, which resulted in invalid syntax.
    This is fixed now.
    The PSUseConsistentWhiteSpace rule was also tweaked to take unary operators into account, so that the formatting naturally looks better to humans rather than following an overly strict rule.
  • The engine is now being built using the .Net Core SDK version 2 and targets .Net Standard 2.0 for PowerShell Core builds.
    The reference used for the PowerShell parser was also updated to the latest version, or to the corresponding reference assemblies for Windows PowerShell, which greatly improved the behaviour of PSScriptAnalyzer on PowerShell 3.
  • Various parsing issues existed with the -Settings parameter when it was not a string that was already resolved.
    This got fixed and should now work in any scenario (see the sketch after this list).
  • PSSA has a UseCompatibleCmdlets rule, and command data files are now present for all versions of Windows PowerShell and even OS-specific ones for PowerShell Core 6.0.2.
    In effect the rule allows you to get warnings when calling cmdlets that are not present in the chosen PowerShell versions.
    More improvements, such as analysing type usage, are planned.
  • The PSUseDeclaredVarsMoreThanAssignments rule has been a pet peeve for many in the past due to its many false positives.
    The rule received a few improvements.
    Some of its limitations (e.g. it is not aware of the scriptblock scope) are still present, but overall there should be fewer false positives.
  • Lots of internal build and packaging improvements were made and PSScriptAnalyzer pushed the envelope as far as using AppVeyor’s Ubuntu builds, which are currently in private Beta.
    Many thanks to @IlyaFinkelshteyn for allowing us to use it and for the great support.
    We are now testing against PowerShell 4, 5.1 and 6.0 (Windows and Ubuntu) in CI.
  • Many community members added documentation fixes, thank you all for that!
  • Parser errors are now returned as diagnostic messages
  • Using ScriptAnalyzer with PowerShell Core requires at least version 6.0.2
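
As a sketch of the -Settings parameter accepting a hashtable directly rather than an already-resolved settings-file path (the two rule names are built-in rules; .\MyModule is a placeholder):

# Pass settings inline instead of a path to a settings .psd1 file
Invoke-ScriptAnalyzer -Path .\MyModule -Settings @{
    IncludeRules = @('PSAvoidUsingCmdletAliases', 'PSAvoidTrailingWhitespace')
}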

Enjoy the new release and let us know how you find it.
PSScriptAnalyzer is also open to PRs if you want to add features or fix something.
Let me know if there are other PSScriptAnalyzer topics that you would like me to write about, such as e.g. custom rules or PSScriptAnalyzer setting files and VSCode integration.

Christoph Bergmeister
PSScriptAnalyzer Maintainer

Categories: Latest Microsoft News

Detecting unconscious bias in models, with R

Latest Microsoft Data Platform News - Thu, 06/14/2018 - 12:18

There's growing awareness that the data we collect, and in particular the variables we include as factors in our predictive models, can lead to unwanted bias in outcomes: from loan applications, to law enforcement, and in many other areas. In some instances, such bias is even directly regulated by laws like the Fair Housing Act in the US. But even if we explicitly remove "obvious" variables like sex, age or ethnicity from predictive models, unconscious bias might still be a factor in our predictions as a result of highly-correlated proxy variables that are included in our model.

As a result, we need to be aware of the biases in our model and take steps to address them. For an excellent general overview of the topic, I highly recommend watching the recent presentation by Rachel Thomas, "Analyzing and Preventing Bias in ML". And for a practical demonstration of one way you can go about detecting proxy bias in R, take a look at the vignette created by my colleague Paige Bailey for the ROpenSci conference, "Ethical Machine Learning: Spotting and Preventing Proxy Bias". 

The vignette details general principles you can follow to identify proxy bias in an analysis, in the context of a case study analyzed using R. The case study considers data and a predictive model that might be used by a bank manager to determine the creditworthiness of a loan applicant. Even though race was not explicitly included in the adaptive boosting model (from the C5.0 package), the predictions are still biased by race, as the chart in the vignette shows.


That's because zipcode, a variable highly associated with race, was included in the model. Read the complete vignette linked below to see how Paige modified the model to ameliorate that bias, while still maintaining its predictive power. All of the associated R code is available in the iPython Notebook.

GitHub (ropenscilabs): Ethical Machine Learning: Spotting and Preventing Proxy Bias (Paige Bailey)

Categories: Latest Microsoft News

Embedding Python in a C++ project with Visual Studio

Latest Microsoft Data Platform News - Thu, 06/14/2018 - 07:00

Watch the video version of this post on VS Toolbox

Let's get started

In this post, we're going to walk through a sample project that demonstrates scripting a C++ application with Python using CPython, PyBind11 and Visual Studio 2017. The sample code is available at github.com/zooba/ogre3d-python-embed and setup instructions are below.

Ogre3d is an open-source game engine written in C++ that has been used in games such as Hob and Torchlight 2. Both the engine and its source code are freely available from their website. For this sample, we have taken one of their character animation demos and extended it with Python. Rather than using the keyboard to move the character around, we can use Python code to call into the C++ functions that control him.

To build and run this sample on your own machine, you will require Visual Studio 2017 with the Python workload, the Python Native Development option, and Python 3.6 32-bit. If you already have Visual Studio 2017, these can be added by launching "Visual Studio Installer" and modifying the existing install.

Note: When you install Python 3.6 32-bit through Visual Studio, it automatically includes debugging symbols. If you install it yourself, you will need to select "Customize installation" and include debugging symbols. If you have already installed it, you can use Programs and Features to modify your install and add debugging symbols.

Clone our repository using git clone --recurse-submodules https://github.com/zooba/ogre3d-python-embed.git or using Team Explorer in Visual Studio. There is a PowerShell script in the root of the repository called get_externals.ps1 that will download and extract the version of the Ogre3d and SDL2 runtimes needed, and will prompt if you are missing either Python 3.6 32-bit or the DirectX redistributables (you can download the latter here, but be aware that this is a very old installer that will offer to install a browser toolbar - feel free to deselect that).

Once everything is installed, open src\PythonCharacter.sln in Visual Studio 2017 and press Ctrl+F5 to build and run the sample. While running, the sample will capture your mouse cursor, but you can use Alt+Tab to switch to another window. We will do that next to look at some of the code.

In Visual Studio, open Solution Explorer and then open the following files. We will be looking at each in the next few sections.

  • ogre.pyi
  • ogre_module.h
  • SinbadCharacterController.h
  • ai.py
Modifying without recompiling

The Python module ai.py is where we define the behavior of Sinbad, our dancing ogre. Collapse the command definitions region by clicking the "-" symbol to the left of #region Command definitions and look at the SCRIPT variable. Each item in this list is the movement we want Sinbad to do, including the amount of time he should do it for. There are already some extra movements listed but commented out, so try uncommenting them or adding your own. You can do this while the demo is running in the background.

Once you've made changes, save this file, use Alt+Tab to go back to the running sample, and press F5. The F5 key will reload the script, and you will see your changes (or an error) immediately. Compare this to normal C++ development, where you would have had to stop running, modify the code, recompile (and wait!), start running again, and return to the same state you were in previously.

This is possible because of CPython's support for embedding, and made simple by the powerful pybind11 library. Embedding allows you to host the Python runtime in any native application, on any platform and using any compiler supported by CPython. So rather than launching "python.exe" with a script, you can load python36.dll into your own application and use it directly.

It is very easy to make Python representations of your C++ classes with pybind11. Switch to the ogre_module.h file to see what we did for this sample.

This file defines the mapping between a Python module named "ogre" and the classes and functions we want to make available. Using the metaprogramming features added in C++11, the pybind11 library automatically generates code to do the type conversions necessary to make Python code transparently interact with your C++ code. If you switch back to ai.py, you will see that the ogre.CharacterController class is imported and used in Python code to call back into C++.

But how can we be sure that it is really doing all this? It seems pretty magical, and barely enough work on our part to make a game engine suddenly support Python scripting. In the next section, we will look at the proof that it is doing what we claim.

Debugging Python and C++

If you've got the demo running, now is the time to exit it by clicking on the Stop Debugging button. Visual Studio is famous for its debugging features, and this one is pretty cool. When you installed the Python Native Development option, we included the ability to do mixed Python/C++ debugging, whether you're in a Python project or a C++ project. You can find information about doing this from Python project in our documentation, but in this case we are going to launch a C++ project with Python debugging enabled.

Find the Start debugging button on the toolbar. Depending on your settings and the file you have open, it may be labelled "Start", "Local Windows Debugger", "Python/Native Debugging" or "Attach...". Clicking the small down arrow next to the button will display your options.

Select "Python/Native Debugging" here to make it your new default. All the settings required to make the sample correctly load are already configured for this project (see this page for the details), so you can now press F5 to launch the sample again, but this time with the debugger attached.

Open ai.py again and set a breakpoint in the on_frame function near the end of the file. This function is called for each frame, but normally returns quickly until it's time to run the next step in the script. So while the demo is running, sooner or later this function will be hit. When it is, you'll see a mixed call stack showing both Python and C++ frames. (In this screenshot, I've hidden external code, so you may see some extra frames from pybind11.)

As you press F11 to step through the code, you will jump between Python and C++ just as naturally as either language on its own, and anywhere you view a Python object you'll be able to see the regular Python view. Breakpoints set in native code or Python code will be hit, and switching up and down the call stack will let you view both kinds of frames.

Type hints for Python code

Finally, let's take another look at the Python code and how Visual Studio helps you be a productive developer. In general, when you have defined a Python class in C++ you are not going to get IntelliSense when editing the code using it. This is unfortunate, as IntelliSense is most useful when using code that does not have easily read sources, but we now have the ability to provide IntelliSense separately.

Open the ogre.pyi file. While it looks very much like Python code, it's actually a type stub file that exists solely for its type hints. We include class and function definitions, but no bodies. With function annotations, we can specify the expected types and the return type of each function, and we will also extract and present documentation strings.

As this file is named ogre.pyi, when the ai.py module imports the native ogre module, IntelliSense will import the type stub instead. Without the type stub, we would not be able to resolve the ogre module at all, as it is generated at runtime by pybind11 and there is no other way for us to find it.

Switch back to ai.py and find the on_frame function. As this is called from C++, we have no information about the arguments that are passed to it, so to get IntelliSense we use type annotations to specify the parameter types. If you start typing character. within this function then you will see all the members that were specified in the type stub.

While this is very convenient and necessary in some situations, most of the time we are able to show you good IntelliSense without needing type hints. If you hover over command in this function you will see all the possible command types; using Go To Definition (F12) on command.execute will take you to either of the implementations of that function; and even though there are no type hints on these functions, we are still able to provide all the completions on character through our type analysis engine. Type stub files are useful for when you want to provide users with IntelliSense for code that cannot be automatically analyzed, and annotations can fill in the gaps that occur when using complex or unusual Python code.

Summary

If you develop native applications that have frequently-changing logic or business rules, it is easy to move that logic from C or C++ into Python, where it can be easily modified and updated without needing to recompile or restart your application. The combination of Visual Studio 2017 with the official CPython releases and pybind11 is the most productive way to develop and debug all aspects of your hybrid C++ and Python project. Download the free Visual Studio 2017 Community to get started today.

Categories: Latest Microsoft News

MP Author Version 8.2

Latest Microsoft Server Management News - Thu, 06/14/2018 - 02:56

The following is a special guest blog from Silect

We are pleased to announce the General Availability of MP Author version 8.2. We’ve made lots of updates and improvements to our family of products for SCOM. Here’s a summary of the changes to MP Author:

  • Support for SCOM 1801
  • When sealing and deploying an MP, deploy the sealed MP, not the unsealed one.
  • Remember the last SCOM version used when creating an MP, as it should be the default next time.
  • Improve display and logging of the versions of reference MPs that are loaded.
  • Groups can now be created without dynamic membership rules.
  • Improved impersonation when browsing remote registries.
  • When parsing scripts for class names, parameters, etc. allow both single and double quotes.
  • When displaying lists of targets, ensure the list shows abstract and singletons if needed.
  • Several dialogs and message boxes now allow you to not show them again (once you know what they are telling you, you can select a “Don’t Show This Again” check box.)
  • Discoveries can now be enabled for a group from within the wizard.
  • Alert, Event, Performance and State views can now be filtered by a group.
  • Editing event, performance, service or process rules/monitors will work correctly even if the machine being browsed doesn’t have the items previously selected.
  • Performance monitors now support two and three-state monitors using simple values, average values, delta values, or consecutive samples.
  • Script monitors and rules allowed overrides on the parameter values but did not use the override value. This has been corrected.
  • Setting a description for an element which does not have a display name failed. The new behaviour is to set the missing display name (to the actual name) at the same time.
  • Do not require hosted classes with no key properties to be singletons because the hosting class may have key properties.
  • Improve handling of reference MPs which don’t exist or are the wrong version.
  • Allow registry and WMI queries to work if they contain characters which are significant in XML (like < or >).
  • The script class wizard will no longer allow users to change the class name, if the class name was determined from the script (if they don’t match, the discovery will fail).
  • Numerous bug fixes, UI and performance improvements
Categories: Latest Microsoft News

DSC Resource Kit Release June 2018

Latest Microsoft Server Management News - Wed, 06/13/2018 - 15:22

We just released the DSC Resource Kit!

This is our biggest release yet!
It sets new records for the most pull requests merged in a single release and the most modules we have ever released at once on GitHub!

This release includes updates to 27 DSC resource modules. In the past 6 weeks, 165 pull requests have been merged and 115 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • ActiveDirectoryCSDsc
  • AuditPolicyDsc
  • CertificateDsc
  • ComputerManagementDsc
  • DFSDsc
  • NetworkingDsc (previously xNetworking)
  • SecurityPolicyDsc
  • SharePointDsc
  • SqlServerDsc
  • xActiveDirectory
  • xBitlocker
  • xDatabase
  • xDhcpServer
  • xDismFeature
  • xDnsServer
  • xDscDiagnostics
  • xDSCResourceDesigner
  • xExchange
  • xHyper-V
  • xPowerShellExecutionPolicy
  • xPSDesiredStateConfiguration
  • xRemoteDesktopSessionHost
  • xSCSMA
  • xSystemSecurity
  • xTimeZone (deprecated since now included in ComputerManagementDsc)
  • xWebAdministration
  • xWinEventLog

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our last community call for the DSC Resource Kit was on June 6. A recording of our updates is available on YouTube here. Join us for the next call at 12PM (Pacific time) on July 18 to ask questions and give feedback about your experience with the DSC Resource Kit.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

Please see our documentation here for information on the support of these resource modules.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or Changelog.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name / Version / Release Notes

ActiveDirectoryCSDsc 3.0.0.0
  • Changed Assert-VerifiableMocks to be Assert-VerifiableMock to meet Pester standards.
  • Updated license year in LICENSE.MD and module manifest to 2018.
  • Removed requirement for Pester maximum version 4.0.8.
  • Added new resource EnrollmentPolicyWebService – see issue 43.
  • BREAKING CHANGE: New Key for AdcsCertificationAuthority, IsSingleInstance – see issue 47.
  • Added:
    • MSFT_xADCSOnlineResponder resource to install the Online Responder service.
  • Corrected filename of MSFT_AdcsCertificationAuthority integration test file.
AuditPolicyDsc 1.2.0.0
  • Moved auditpol call in the helper module to an external process to better control output
  • auditpol output is now converted to CSV to remove the need to parse the text output
  • All resources have been updated to use the new helper module functionality
  • Added the Ensure parameter default value of Present to the AuditPolicySubcategory resource Test-TargetResource function
CertificateDsc 4.1.0.0
  • PfxImport:
    • Changed so that PFX will be reimported if private key is not installed – fixes Issue 129.
    • Corrected to meet style guidelines.
    • Corrected path parameter description – fixes Issue 125.
    • Refactored to remove code duplication by creating Get-CertificateStorePath.
    • Improved unit tests to meet standards and provide better coverage.
    • Improved integration tests to meet standards and provide better coverage.
  • CertificateDsc.Common:
    • Corrected to meet style guidelines.
    • Added function Get-CertificateStorePath for generating Certificate Store path.
    • Remove false verbose message from Test-Thumbprint – fixes Issue 127.
  • CertReq:
    • Added detection for FIPS mode in Test-Thumbprint – fixes Issue 107.
ComputerManagementDsc 5.1.0.0
  • TimeZone:
  • Moved Test-Command from ComputerManagementDsc.ResourceHelper to ComputerManagementDsc.Common module to match what TimeZone requires. It was not exported in ComputerManagementDsc.ResourceHelper and not used.
DFSDsc 4.1.0.0
  • Added Hub and Spoke replication group example – fixes Issue 62.
  • Enabled PSSA rule violations to fail build – fixes Issue 320.
  • Allow null values in resource group members or folders – fixes Issue 27.
  • Added a CODE_OF_CONDUCT.md with the same content as in the README.md – fixes Issue 67.
NetworkingDsc
(previously xNetworking) 6.0.0.0
  • New Example 2-ConfigureSuffixSearchList.ps1 for multiple SuffixSearchList entries for resource DnsClientGlobalSetting.
  • BREAKING CHANGE:
    • Renamed xNetworking to NetworkingDsc – fixes Issue 119.
    • Changed all MSFT_xResourceName to MSFT_ResourceName.
    • Updated DSCResources, Examples, Modules and Tests with new naming.
    • Updated Year to 2018 in License and Manifest.
    • Updated README.md from xNetworking to NetworkingDsc.
  • MSFT_IPAddress:
    • Updated to allow setting multiple IP Addresses when one is already set – Fixes Issue 323
  • Corrected CHANGELOG.MD to report that issue with InterfaceAlias matching on Adapter description rather than Adapter Name was released in 5.7.0.0 rather than 5.6.0.0 – See Issue 315.
  • MSFT_WaitForNetworkTeam:
    • Added a new resource to wait for a network team to become “Up”.
  • MSFT_NetworkTeam:
    • Improved detection of environment for running network team integration tests.
  • MSFT_NetworkTeamInterface:
    • Improved detection of environment for running network team integration tests.
  • Added a CODE_OF_CONDUCT.md with the same content as in the README.md – fixes Issue 337.
SecurityPolicyDsc 2.3.0.0
  • Updated documentation.
    • Add example of applying Kerberos policies
    • Added hyper links to readme
  • Refactored the SID translation process to not throw a terminating error when called from Test-TargetResource
  • Updated verbose message during the SID translation process to identify the policy where an orphaned SID exists
SharePointDsc 2.3.0.0
      • Changes to SharePointDsc
        • Added a Branches section to the README.md with Codecov and build badges for both master and dev branch.
      • All Resources
        • Added information about the Resource Type in each ReadMe.md files.
      • SPFarm
        • Fixed issue where the resource throws an exception if the farm already exists and the server has been joined using the FQDN (issue 795)
      • SPTimerJobState
        • Fixed issue where the Set method for timerjobs deployed to multiple web applications failed.
      • SPTrustedIdentityTokenIssuerProviderRealms
        • Added the resource.
      • SPUserProfileServiceApp
        • Now supports specifying the host Managed Path, and properly sets the host.
        • Changed error for running with Farm Account into being a warning
      • SPUserProfileSyncConnection
        • Added support for filtering disabled users
        • Fixed issue where UseSSL was set to true resulted in an error
        • Fixed issue where the connection was recreated when the name contained a dot (SP2016)
SqlServerDsc 11.3.0.0
  • Changes to SqlServerDsc
    • Moved decoration for integration test to resolve a breaking change in DscResource.Tests.
    • Activated the GitHub App Stale on the GitHub repository.
    • Added a CODE_OF_CONDUCT.md with the same content as in the README.md issue 939.
    • New resources:
    • Fix for issue 779 Paul Kelly (@prkelly)
xActiveDirectory 2.19.0.0
  • Changes to xActiveDirectory
    • Activated the GitHub App Stale on the GitHub repository.
    • The resources are now in alphabetical order in the README.md (issue 194).
    • Adding a Branches section to the README.md with Codecov badges for both master and dev branch (issue 192).
    • xADGroup no longer resets GroupScope and Category to default values (issue 183).
    • The helper function script file MSFT_xADCommon.ps1 was renamed to MSFT_xADCommon.psm1 to be a module script file instead. This makes it possible to report code coverage for the helper functions (issue 201).
xBitlocker 1.2.0.0
  • Converted appveyor.yml to install Pester from PSGallery instead of from Chocolatey.
  • Added Codecov support.
  • Updated appveyor.yml to use the one in template.
  • Added folders for future unit and integration tests.
  • Added Visual Studio Code formatting settings.
  • Added .gitignore file.
  • Added markdown lint rules.
  • Fixed encoding on README.md.
  • Added PowerShellVersion = "4.0", and updated copyright information, in the module manifest.
  • Fixed issue which caused Test to incorrectly succeed on fully decrypted volumes when correct Key Protectors were present (issue 13)
  • Fixed issue which caused xBLAutoBitlocker to incorrectly detect Fixed vs Removable volumes. (issue 11)
  • Fixed issue which made xBLAutoBitlocker unable to encrypt volumes with drive letters assigned. (issue 10)
  • Fixed an issue in the CheckForPreReqs function where, on Server Core, the installation of the non-existent Windows Feature “RSAT-Feature-Tools-BitLocker-RemoteAdminTool” was erroneously checked. (issue 8)
xDatabase 1.8.0.0
  • Added support for SQL Server 2017
  • xDBPackage now uses the shared function to identify the paths for the different SQL server versions
xDhcpServer 1.7.0.0
  • Changes to xDhcpServer
    • Updated year in LICENSE file.
    • Updated year in module manifest.
    • Added Codecov and status badges to README.md.
    • Update appveyor.yml to use the default template.
  • Added xDhcpServerOptionDefinition
xDismFeature 1.3.0.0
  • Added unit test
  • Fixed an issue where Test-TargetResource always fails on a non-English OS (issue 11)
xDnsServer 1.11.0.0
  • Changes to xDnsServer
    • Updated appveyor.yml to use the default template and add CodeCov support (issue 73).
    • Adding a Branches section to the README.md with Codecov badges for both master and dev branch (issue 73).
    • Updated description of resource module in README.md.
  • Added resource xDnsServerZoneAging. Claudio Spizzi (@claudiospizzi)
  • Changes to xDnsServerPrimaryZone
  • Changes to xDnsRecord
xDscDiagnostics 2.7.0.0
  • Fixed help formatting.
xDSCResourceDesigner 1.11.0.0
  • Added support for Codecov.
  • Fix Test-xDscSchema failing to call Remove-WmiObject on PowerShell Core. The cmdlet Remove-WmiObject was removed from the code, instead the temporary CIM class is now removed by using mofcomp.exe and the preprocessor command pragma deleteclass (issue 67).
xExchange 1.21.0.0
  • Added CHANGELOG.md file
  • Added .markdownlint.json file
  • Updated README.md and CHANGELOG.md files to respect MD009, MD0013 and MD032 rules
  • Added .MetaTestOptIn.json file
  • Updated appveyor.yml file
  • Added .codecov.yml file
  • Renamed Test folder to Tests
  • Updated README.md: Add codecov badges
  • Fixed PSSA required rules in:
    • xExchClientAccessServer.psm1
    • xExchInstall.psm1
    • xExchMaintenanceMode.psm1
    • TransportMaintenance.psm1
    • xExchTransportService.psm1
  • Fixed Validate Example files in:
    • ConfigureAutoMountPoints-FromCalculator.ps1
    • ConfigureAutoMountPoints-Manual.ps1
    • ConfigureDatabases-FromCalculator.ps1
    • InternetFacingSite.ps1
    • RegionalNamespaces.ps1
    • SingleNamespace.ps1
    • ConfigureVirtualDirectories.ps1
    • CreateAndConfigureDAG.ps1
    • EndToEndExample 1 to 10 files
    • JetstressAutomation
    • MaintenanceMode
    • PostInstallationConfiguration.ps1
    • InstallExchange.ps1
    • QuickStartTemplate.ps1
    • WaitForADPrep.ps1
  • Remove default value for Switch Parameter in TransportMaintenance.psm1 for functions:
    • Clear-DiscardEvent
    • LogIfRemain
    • Wait-EmptyEntriesCompletion
    • Update-EntriesTracker
    • Remove-CompletedEntriesFromHashtable
  • Fixed PSSA custom rules in:
    • xExchActiveSyncVirtualDirectory.psm1
    • xExchAntiMalwareScanning.psm1
    • xExchAutodiscoverVirtualDirectory.psm1
    • xExchAutoMountPoint.psm1
    • xExchClientAccessServer.psm1
    • xExchDatabaseAvailabilityGroup.psm1
    • xExchDatabaseAvailabilityGroupMember.psm1
    • xExchDatabaseAvailabilityGroupNetwork.psm1
    • xExchEcpVirtualDirectory.psm1
    • xExchEventLogLevel.psm1
    • xExchExchangeCertificate.psm1
    • xExchExchangeServer.psm1
    • xExchImapSettings.psm1
    • xExchInstall.psm1
    • xExchJetstress.psm1
    • xExchJetstressCleanup.psm1
    • xExchMailboxDatabase.psm1
    • xExchMailboxDatabaseCopy.psm1
    • xExchMailboxServer.psm1
    • xExchMailboxTransportService.psm1
    • xExchMaintenanceMode.psm1
    • xExchMapiVirtualDirectory.psm1
    • xExchOabVirtualDirectory.psm1
    • xExchOutlookAnywhere.psm1
    • xExchOwaVirtualDirectory.psm1
    • xExchPopSettings.psm1
    • xExchPowerShellVirtualDirectory.psm1
    • xExchReceiveConnector.psm1
    • xExchUMCallRouterSettings.psm1
    • xExchUMService.psm1
    • xExchWaitForADPrep.psm1
    • xExchWaitForDAG.psm1
    • xExchWaitForMailboxDatabase.psm1
    • xExchWebServicesVirtualDirectory.psm1
  • Updated xExchange.psd1
  • Added issue template file (ISSUE_TEMPLATE.md) for “New Issue” and pull request template file (PULL_REQUEST_TEMPLATE.md) for “New Pull Request”.
  • Fix issue Diagnostics.CodeAnalysis.SuppressMessageAttribute best practices
  • Renamed xExchangeCommon.psm1 to xExchangeHelper.psm1
  • Renamed the folder MISC (that contains the helper) to Modules
  • Added xExchangeHelper.psm1 in xExchange.psd1 (section NestedModules)
  • Removed all lines with Import-Module xExchangeCommon.psm1
  • Updated .MetaTestOptIn.json file with Custom Script Analyzer Rules
  • Added Integration, TestHelpers and Unit folder
  • Moved Data folder in Tests
  • Moved Integration tests to Integration folder
  • Moved Unit test to Unit folder
  • Renamed xEchange.Tests.Common.psm1 to xExchangeTestHelper.psm1
  • Renamed xEchangeCommon.Unit.Tests.ps1 to xExchangeCommon.Tests.ps1
  • Renamed function PrepTestDAG to Initialize-TestForDAG
  • Moved function Initialize-TestForDAG to xExchangeTestHelper.psm1
  • Fix error-level PS Script Analyzer rules for TransportMaintenance.psm1
xHyper-V 3.12.0.0
  • Changes to xHyper-V
    • Removed alignPropertyValuePairs from the Visual Studio Code default style formatting settings (issue 110).
xPowerShellExecutionPolicy 3.0.0.0

xPSDesiredStateConfiguration 8.3.0.0
  • Changes to xPSDesiredStateConfiguration
  • Changes to xWindowsProcess
    • Integration tests for this resource should no longer fail randomly. A timing issue made the tests fail in certain scenarios (issue 420).
  • Changes to xDSCWebService
    • Added the option to use a certificate based on its subject and template name instead of its thumbprint. Resolves issue 205.
    • xDSCWebService: Fixed an issue where Test-WebConfigModulesSetting would return $true when web.config contains a module and the desired state was for it to be absent. Resolves issue 418.
  • Updated the main DSCPullServerSetup readme to read more easily, then updated the PowerShell comment-based help for each function to follow normal help standards. James Pogran (@jpogran)
  • xRemoteFile: Remove progress bar for file download. This resolves issues 165 and 383 Claudio Spizzi (@claudiospizzi)
xRemoteDesktopSessionHost 1.6.0.0
  • xRDSessionCollectionConfiguration: Add support to configure UserProfileDisks on Windows Server 2016
xSCSMA 2.0.0.0
  • Added MSI install logging for MSFT_xSCSMARunbookWorkerServerSetup and MSFT_xSCSMARunbookWorkerServerSetup
  • Added missing -Port parameter argument for New-SmaRunbookWorkerDeployment in MSFT_xSCSMARunbookWorkerServerSetup
  • Fixed MSFT_xSCSMARunbookWorkerServerSetup and MSFT_xSCSMAWebServiceServerSetup using incorrect executable for version checking
  • Remove System Center Technical Preview 5 support. Close issue 18
  • Close issue 19 (always install self-signed certificate)
  • BREAKING CHANGE: change SendCEIPReports parameter to SendTelemetryReports. Close issue 20
  • Added description for new parameters at README.md
  • Fix return state of the current SendTelemetryReports
  • Fix syntax at source code
xSystemSecurity 1.4.0.0

xTimeZone 1.8.0.0
  • THIS MODULE HAS BEEN DEPRECATED. It will no longer be released. Please use the “TimeZone” resource in ComputerManagementDsc instead.
  • Fixed xTimeZone Examples link in README.md.
xWebAdministration 2.0.0.0
  • Changes to xWebAdministration
    • Moved file Codecov.yml that was added to the wrong path in previous release.
  • Updated xWebSite to include ability to manage custom logging fields. Reggie Gibson (@regedit32)
  • Updated xIISLogging to include ability to manage custom logging fields (issue 267). @ldillonel
  • BREAKING CHANGE: Updated xIisFeatureDelegation to be able to manage any configuration section. Reggie Gibson (@regedit32)
xWinEventLog 1.2.0.0
  • Converted appveyor.yml to install Pester from PSGallery instead of from Chocolatey.
  • Fix PSSA errors.
How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available starting in WMF 5.0) to find modules with DSC Resources:

# To list all modules tagged as DSCResourceKit
Find-Module -Tag DSCResourceKit

# To list all DSC resources from all sources
Find-DscResource

Please note only those modules released by the PowerShell Team are currently considered part of the ‘DSC Resource Kit’ regardless of the presence of the ‘DSC Resource Kit’ tag in the PowerShell Gallery.

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the xCertificate module, go to:
https://github.com/PowerShell/xCertificate.

All DSC modules are also listed as submodules of the DscResources repository in the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Keim
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

Categories: Latest Microsoft News

Hotfix for Microsoft R Open 3.5.0 on Linux

Latest Microsoft Data Platform News - Wed, 06/13/2018 - 12:50

On Monday, we learned about a serious issue with the installer for Microsoft R Open on Linux-based systems. (Thanks to Norbert Preining for reporting the problem.) The issue was that the installation and de-installation scripts modified the system shell and did not use the standard practices for creating and restoring symlinks for system applications.

The Microsoft R team developed a solution to the problem with the help of some Debian experts at Microsoft, and last night issued a hotfix for Microsoft R Open 3.5.0, which is now available for download. With this fix, the MRO installer no longer relinks /bin/sh to /bin/bash, and instead uses dpkg-divert on Debian-based platforms and update-alternatives on RPM-based platforms. We will also request a discussion with the Debian maintainers of R to further review our installation process. Finally, with the next release (MRO 3.5.1, scheduled for August 9) we will also include the setup code, including the installation scripts, in the MRO GitHub repository for everybody to inspect and give feedback on.

You can find more details about the issue, how it was resolved, and the processes we have put in place to make sure it doesn't happen again in the blog post linked below.

Microsoft Machine Learning Server blog: How we are fixing our installer for Microsoft R Open on Linux

 

Categories: Latest Microsoft News

Unified policies with Cloud App Security and the Microsoft Data Classification Service

Latest Microsoft Server Management News - Wed, 06/13/2018 - 10:00

Microsoft Cloud App Security now integrates with the Microsoft Data Classification Service to create a consistent policy creation experience across Office 365, Azure Information Protection and Microsoft Cloud App Security. Find out how this allows your teams responsible for data security to leverage existing processes and apply them more broadly.

The need to protect your data

Organizations today focus heavily on cloud-run solutions, whether to increase employee productivity or to drive other efficiencies across the business. For the majority of these organizations, data is their most valuable corporate asset, and to operate successfully, data must be ubiquitous.

That's why companies invest heavily in information protection services to ensure secure handling and sharing of their data, without slowing down the business. Data classification can help organizations manage and monitor the usage and sharing of sensitive information such as personal data, financial data, or intellectual property. Whether a user acts with malicious intent or employees simply aren't familiar with existing processes for information protection, both can contribute to data loss or exposure.

An integrated experience with the Microsoft Data Classification Service

Organizations invest a lot of time determining which data can be shared and how, both across and outside the organization. Microsoft understands how important it is to make the most of your time and thought investments, and to enable you to benefit from a more holistic, cross-service paradigm.

That's why Microsoft Cloud App Security is now natively integrated with the Microsoft Data Classification Service to help classify the files in all of your cloud apps.

It provides a consistent information protection experience across Office 365, Azure Information Protection, and Microsoft Cloud App Security (MCAS), and it allows you to extend your data classification efforts to the third-party cloud apps protected by MCAS, leveraging the decisions you have already made across an even greater number of apps.

With no additional configuration required, when creating a data loss prevention policy for your files in Microsoft Cloud App Security, you will automatically have the option to set the Inspection method to use the Microsoft Data Classification Service.

Create a policy and select the new Data Classification Service as the Inspection method

You can use the default sensitive information types, as well as custom sensitive information types (which support complex patterns built from regex, keywords, and large dictionaries) that you may have already created in Office 365, and reuse them to define what happens to files protected by Microsoft Cloud App Security.

Select default sensitive information types or create custom ones to meet your data classification needs

Setting these policies in Microsoft Cloud App Security enables you to easily extend the strength of the Office 365 DLP capabilities to all your other sanctioned cloud apps, and to protect the data stored within them with the full toolset Microsoft Cloud App Security provides, such as the ability to automatically apply AIP labels and to control sharing permissions.

This is the first step in creating a simplified information protection experience. Later this year we will release an experience that provides a central location to create all of your policies and apply them across all of your apps, on-premises and in the cloud.

If you already protect your cloud apps with Microsoft Cloud App Security, this feature is now available in your tenant*. If you don't work with Microsoft Cloud App Security yet, this is a great opportunity to start a free trial and get started today by gaining visibility into your cloud apps and services, leveraging our sophisticated analytics to identify and combat cyber threats, and controlling how your data travels.

Provide feedback and learn more

We love hearing your feedback. Let us know what you think in the Microsoft Cloud App Security Tech Community.

For more detailed information about this new capability, as well as a step-by-step guide for how to set up DLP policies using the Microsoft Data Classification Service, please visit our technical documentation website.

*Deployment limitations: The Data Classification Service (DCS) is currently only available for the following Office 365 tenant locations: United States, Europe excluding France. We are working with the DCS team to deploy the service to additional regions and will update the list as more become available.

Categories: Latest Microsoft News

Hallelujah! Azure AD delegated application management roles are in public preview!

Latest Microsoft Server Management News - Wed, 06/13/2018 - 09:59

Howdy folks,

Today is a big day. I’m bouncing up and down at my PC as I type this because I’m just so happy to announce the public preview of our new delegated app management roles. If you have granted people the Global Administrator role for tasks like configuring enterprise applications, you can now move them to a lesser-privileged role instead. Doing so will help improve your security posture and reduce the potential for unfortunate mistakes.

Additionally, we’re adding support for per-application ownership, which allows you to grant full management permissions on a per-application basis.

And lastly, we’re introducing a role that allows you to selectively grant people the ability to create application registrations. Read on for more details about each of these new permissions options!

Application administrator roles as an alternative to global administrator

Use the following roles to grant people access to manage all of your directory's applications without granting all of the other unrelated and powerful permissions included in the Global Administrator role.

  • Application Administrator: This role provides the ability to manage all applications in the directory, including registrations, SSO settings, user and group assignments and licensing, Application Proxy settings, and consent. It does not grant the ability to manage conditional access.
  • Cloud Application Administrator: This role grants all the abilities of the Application Administrator, except it does not grant access to Application Proxy settings (no on-premises access).

You can assign these new roles in the Azure AD portal, on the Directory roles tab of the user profile blade, or in Azure AD Privileged Identity Management.
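
If you prefer scripting, here is a minimal sketch using the AzureAD PowerShell module. It assumes the module is installed and you have already run Connect-AzureAD, that the role has been activated in the tenant (roles only appear in Get-AzureADDirectoryRole once activated), and that the user name is hypothetical:

# Resolve the directory role and the user, then add the user as a role member.
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Application Administrator" }
$user = Get-AzureADUser -ObjectId "chris@contoso.com"
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId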

Read more about the application administrator roles, including more specifics on permissions.

Granting ownership access to manage individual enterprise applications

We now support ownership for enterprise applications, so you can do even finer-grained delegation if you want. This complements the existing support for assigning application registration owners.

Ownership is assigned on a per-enterprise application basis in the enterprise apps blade. The benefit is owners can manage only the enterprise applications they own. For example, you can assign an owner for the Salesforce application, and that owner can manage access to and configuration for Salesforce, and no other applications. An enterprise application can have many owners, and a user can be the owner for many enterprise applications.

  • Enterprise Application Owner: This role grants the ability to manage ‘owned’ enterprise applications, including SSO settings, user and group assignments, and adding additional owners. It does not grant the ability to manage Application Proxy settings or conditional access.
  • Application Registration Owner: This role was previously available and grants the ability to manage ‘owned’ application registrations, including the application manifest and adding additional owners.

You can assign an enterprise application owner in the Azure AD portal, on the Owners tab of the enterprise applications blade.
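
For illustration, the same assignment can be sketched with the AzureAD PowerShell module. This assumes the module is installed and connected, that "Salesforce" resolves to a single service principal in your tenant, and that the user name is hypothetical:

# Resolve the enterprise application (its service principal) and add an owner.
$sp = Get-AzureADServicePrincipal -SearchString "Salesforce"
$user = Get-AzureADUser -ObjectId "dana@contoso.com"
Add-AzureADServicePrincipalOwner -ObjectId $sp.ObjectId -RefObjectId $user.ObjectId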

You can learn more about enterprise application ownership here.

Selectively allowing people to create application registrations

By default, all users can create application registrations. You can disable this by setting “Users can register applications” to No. Starting today, using the new Application Developer role, you can selectively grant back the ability to create application registrations to people as needed.

  • Application Developer: This role grants the ability to create application registrations when the ‘Users can register applications’ switch is set to No. Application Developers can also consent for themselves when the ‘users can consent to applications accessing company data on their behalf’ switch is set to No. When an Application Developer creates a new application registration, they are automatically added as the first owner.

You can assign the Application Developer role in the Azure AD portal, on the Directory roles tab of the user profile blade, or in Azure AD Privileged Identity Management.

As always, we’d love to hear your feedback, thoughts, and suggestions! Feel free to share with us on the Azure AD administrative roles forum, or leave comments below. We look forward to hearing from you.

Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Categories: Latest Microsoft News

Oracle Launches Internet Intelligence Map Providing a Unique View into the Global Internet

Latest Oracle Press Releases - Wed, 06/13/2018 - 06:25
Press Release

Oracle Launches Internet Intelligence Map Providing a Unique View into the Global Internet

Free dashboard delivers insight into the impact of internet disruptions

VELOCITY, San Jose, Calif.—Jun 13, 2018

Oracle (NYSE:ORCL) today announced availability of the Internet Intelligence Map, providing users with a simple, graphical way to track the health of the internet and gain insight into the impact of events such as natural disasters or state-imposed interruptions. The map is part of Oracle’s Internet Intelligence initiative, which provides insight and analysis on the state of global internet infrastructure. To access the free Internet Intelligence Map, visit internetintel.oracle.com.

For more than a decade, members of Oracle’s Internet Intelligence team have broken some of the biggest stories about the internet. From BGP hijacks to submarine cable breaks, Oracle’s Internet Intelligence team frequently publishes objective data and analysis that informs public understanding of the technical underpinnings of the internet and its effects on topics like geopolitics and e-commerce. With today’s news, Oracle is now making core analytic capabilities available to everyone via the Internet Intelligence Map. Using one of the world’s most comprehensive internet performance data sets and backed by years of research and analytics, Oracle has developed the premier resource and authority for reliable information on the functioning of the internet.

“The internet is the world’s most important network, yet it is incredibly volatile. Disruptions on the internet can affect companies, governments, and network operators in profound ways,” said Kyle York, vice president of product strategy for Oracle Cloud Infrastructure and the general manager for Oracle’s Dyn Global Business Unit. “As a result, all of these stakeholders need better visibility into the health of the global internet. With this offering, we are delivering on our commitment to making it a better, more stable experience for all who rely on it.”

The Internet Intelligence Map presents country-level connectivity statistics based on traceroutes, BGP, and DNS query volumes on a single dashboard. By presenting these three dimensions of internet connectivity side-by-side, users can investigate the impact of an issue on internet connectivity worldwide.

“It’s important to have a global view of the internet in order to understand how external events prevent users from reaching your web-based applications and services. It is only when you have this insight that you can work around those issues to improve availability and performance,” said Jim Davis, Founder and Principal Analyst of Edge Research Group.

The Internet Intelligence Map is just one of many advanced awareness and visibility tools that help Oracle improve the experience of the cloud by making it better and more reliable every day. This offering is powered by Oracle Cloud Infrastructure, which offers a set of core infrastructure services to provide customers the ability to run any workload in the cloud. Only Oracle Cloud Infrastructure provides the compute, storage, networking, and edge services necessary to deliver the end-to-end performance required of today's modern enterprise.

Contact Info

Danielle Tarp
Oracle
+1.408.921.7063
danielle.tarp@oracle.com

Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com

About Oracle Cloud Infrastructure

Oracle Cloud Infrastructure combines the benefits of public cloud (on-demand, self-service, scalability, pay-for-use) with those benefits associated with on-premises environments (governance, predictability, control) into a single offering. Oracle Cloud Infrastructure takes advantage of a high-scale, high-bandwidth network that connects cloud servers to high-performance local, block, and object storage to deliver a cloud platform that yields the highest performance for traditional and distributed applications, as well as highly available databases. With the acquisitions of Dyn and Zenedge, Oracle Cloud Infrastructure extended its offering to include Dyn’s best-in-class DNS and email delivery solutions and Zenedge’s next-generation Web Application Firewall (WAF) and Distributed Denial of Service (DDoS) capabilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • +1.408.921.7063

Nicole Maloney

  • +1.650.506.0806


Categories: Latest Oracle News

New Oracle Autonomous Cloud Services Ease Mobile Development, Data Integration

Latest Oracle Press Releases - Wed, 06/13/2018 - 05:00
Press Release

New Oracle Autonomous Cloud Services Ease Mobile Development, Data Integration

AI-based PaaS services cut costs and speed development of chatbots, data integration, and API management

Redwood Shores, Calif.—Jun 13, 2018

Oracle (NYSE: ORCL) today announced the availability of its next-generation Oracle Cloud Platform services featuring built-in autonomous capabilities, including Oracle Mobile Cloud Enterprise, Oracle Data Integration Platform Cloud, and Oracle API Platform Cloud. With embedded artificial intelligence (AI) and machine learning, these platform services automate operational tasks to enable organizations to lower cost, reduce risk, accelerate innovation, and get predictive insights.

To continuously innovate while keeping pace with dynamic business environments, enterprises need secure, comprehensive, integrated cloud services to build new applications and run their most demanding workloads. Only Oracle's cloud services can intelligently automate key operational functions, including tuning, patching, backups, and upgrades, giving organizations more time to focus on strategic business activities while delivering maximum performance, high availability, and critical security features.

“There is tremendous value for our customers in embedding AI and machine learning capabilities throughout our entire cloud portfolio,” said Amit Zavery, executive vice president of development, Oracle Cloud Platform. “Customers have embraced our vision for an autonomous enterprise. With Oracle’s autonomous platform services, organizations can capitalize on AI to reduce costs, speed innovation, and transform how they do business.”

Last year, Oracle outlined its vision for an autonomous enterprise, unveiling the world’s first Autonomous Database. Since then, the company announced it would extend autonomous capabilities to its entire cloud platform portfolio. Delivering on that promise, Oracle recently made available a number of autonomous platform services, including Oracle Autonomous Data Warehouse Cloud, Oracle Analytics Cloud, Oracle Integration Cloud, and Oracle Visual Builder. With today’s availability of another set of autonomous platform services, Oracle continues to build momentum. Later in 2018, Oracle plans to release more autonomous capabilities focused on Blockchain, security and management, and additional database workloads, including OLTP and NoSQL.

Mutua Madrid Open Develops MatchBot with Oracle Cloud Platform

Mutua Madrid Open became the first ATP World Tour Masters 1000 and Premier WTA tournament to incorporate an AI-equipped chatbot to improve communication with tennis fans. Implemented with Oracle Cloud Platform, the chatbot, named “MatchBot,” used AI to maintain natural conversations that provided fans with information on the event, players, and results, as well as details on hospitality services, discounts on merchandise, ticket sales, access, and parking.

“We wanted to position the Mutua Madrid Open as the tournament of the 21st century,” said Gerard Tsobanian, president and CEO of Mutua Madrid Open. “Development of the MatchBot positions us at the forefront of technology and innovation. With this new technology, we were able to provide visitors with an amazing experience—a pleasant, simpler, and faster way to get the information they wanted about the tournament.”

New Oracle Cloud Platform Services Featuring Built-in AI

By adding autonomous capabilities to its latest set of Platform Cloud services, Oracle helps organizations easily develop new innovative user experiences with chatbot and voice capabilities, enables business users to perform intelligent data integration tasks, and exposes business logic and data via robust API design/management.

Oracle Mobile Cloud Enterprise

Oracle Mobile Cloud Enterprise provides a complete, open, and proven enterprise platform to develop, deliver, analyze, and manage mobile apps and AI-powered chatbots that connect and extend any backend system in a secure, scalable manner. Learn more here.

  • Self-learning chatbots observe interactions and preferences to automate frequently performed actions.
  • Automated learning from conversations ensures higher accuracy in the machine learning for the smart bots, including seamless handoffs to human agents via trained AI algorithms.
  • Automatic generation of QnA chatbots extracts knowledge from unstructured data by leveraging machine learning.

Oracle Data Integration Platform Cloud

Oracle Data Integration Platform Cloud helps organizations make better and faster decisions by providing access to all data and unlocking value from it faster, through a combination of machine learning and artificial intelligence-powered features that stream, migrate, transform, enrich, and govern data from anywhere. Learn more here.

  • Simplifies and automates the creation of big data lakes and data warehouses through one-click, self-defining tasks that deliver streaming or batch data, thereby improving standardization and efficiency for big data projects.
  • Delivers self-optimizing data pipelines for rapid data delivery into Oracle Autonomous Data Warehouse Cloud.
  • Enables trusted, governed cloud-based analytics through system-guided data sharing between SaaS, on-premises, and hybrid business applications.
  • Speeds up self-service data preparation through machine-assisted data curation and eliminates manual work when creating data pipelines.

Oracle API Platform Cloud

Oracle API Platform Cloud supports agile API-first design and development, enabling hybrid deployment of the API gateway across Oracle Cloud, on-premises environments, and third-party clouds. It provides insight into KPIs covering every aspect of the API lifecycle, while employing the most up-to-date security protocols. Learn more here.

  • Continuously learns about usage patterns and recommends plan allocation limits and configurations.
  • Recommends policies and policy configurations to API managers using predictive algorithms, based on the usage and configuration of other policies.

Oracle Developer Cloud

Oracle Developer Cloud is a complete development platform, included with all Oracle Cloud services, that automates software development and delivery cycles and helps teams manage agile development processes. With an easy-to-use web interface and integration with popular development tools, it helps deliver better applications faster. Learn more here.

  • Build automation for multiple development languages and environments, supporting a variety of popular build frameworks such as Maven, Ant, Gradle, npm, Grunt, Gulp, and Bower.
  • Test automation with popular testing frameworks such as JUnit and Selenium, enabling automation of both logic and UI testing.
  • Environment provisioning automation via command-line interfaces for Docker, Kubernetes, and Terraform.
  • Continuous integration automation through visual pipeline flows, with live pipeline execution and build job history monitored from a central location.
Contact Info

Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com

Kristin Reeves
Blanc and Otus
+1.925.787.6744
kristin.reeves@blancandotus.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Kristin Reeves

  • +1.925.787.6744


Categories: Latest Oracle News

How We Built (Rebuilt!) Intune into a Leading Globally Scaled Cloud Service

Latest Microsoft Server Management News - Tue, 06/12/2018 - 12:00

This is something we usually don't do here at Microsoft, but today I want to share a blog post (included below) that was written for internal purposes only.

Let me answer your first question: But why are you releasing this information?

Simply put, as I meet with customers navigating their own unique digital transformations, I often hear that IT teams (and their leaders) are struggling with how to scale their infrastructure or apps, not just in the cloud but also in terms of what the cloud will demand from them during that process.

I deeply understand that feeling; I have been in those shoes.

My goal with the information below is to share how we built our own global cloud service, and I hope some of it can act as a blueprint for your efforts to build something of your own.

We architected Intune from the cloud and for the cloud, and we ended up with the world's leading mobility service, the only one with true cloud scale.

This architecture is the best way to enable our customers to achieve their goals, safeguard their data, and protect their intellectual property all without hitting any limits on productivity or scale.

It is a rare experience to get to work on something where you started from scratch (literally: File-New), and then get to operate and optimize what you have created into a high-scale cloud service that is servicing billions of transactions every day.

Working on Intune has been one of these rare and rewarding opportunities.

What stands out to me about this blog post is the genius of the engineers who did this work. There's no other way to say it. As you look around at the massive (and always growing) mobile management marketplace today, Intune is unique in that it is a true cloud service. Other solutions claim that they have a cloud model but, at their core, they are simply not built as cloud services.

The Intune engineering team has become one of the teams at Microsoft that is regularly sought out and asked to present the how-tos of our journey to a high-scale cloud service. When we first started Intune, Azure was just a glimmer in the eyes of a few incredible engineers; because of this, the initial architecture was not built on Azure. As we saw the usage of Intune begin to really scale in 2015, we knew we had to rearchitect it as a modern Azure service built on a cloud-based microservice architecture.

This ended up being a monumental decision.

The reason this decision mattered so much is obvious in retrospect: as a service scales to tens of millions, then hundreds of millions, and then billions of devices and apps, a cloud-based architecture is the only thing that can deliver the reliability, scale, and quality you expect. Also, without a cloud-based architecture, our engineering teams couldn't respond fast enough to customer needs.

Since making this change, I cannot count the number of customers who have commented on the improvements in reliability and performance they have seen, as well as the speed at which we are delivering new innovations.

This is all possible because of architecture. Architecture really, really matters.

This perspective on the architecture came about after many hundreds of hours spent examining other solutions before we decided to rebuild Intune. Before we began that work, we intensively analyzed and evaluated a number of competing solutions to see if we should buy one of them instead. None of the solutions we considered were cloud services. Not one. Every single one was a traditional on-premises client-server product hosted in a datacenter and called a cloud service.

That was not a path toward hyperscale.

One easy way to tell whether a product is client-server or a true cloud service is to look for a version number. If you see something like "Product X v8.3," you immediately know it's not a cloud service. There is no such thing as a version number in a cloud service that is being updated multiple times a day.

I am excited about the future of Intune because there are scenarios that only cloud services can deliver. This means the Intune engineering team is already delivering solutions for problems our competitors haven't even discovered yet, and it means the expertise of this team will keep accelerating to anticipate and meet our customers' needs.

As an engineering leader, it is incredibly exciting to consider what this means for the services and tools we'll build for our customers, as well as the ways we'll be able to respond when those customers come to us with new projects and new challenges.

If you haven't switched to Intune yet, take a few minutes to read this post, and then let's get in touch.

 

 

Intune's Journey to a Highly Scalable Globally Distributed Cloud Service

 

Starting around the second half of 2015, Intune, which is part of Enterprise Mobility + Security (EMS), began its journey as the fastest-growing business in the history of Microsoft. We started seeing this rapid business growth result in a corresponding rapid increase in back-end operations at scale. At the same time, we were also innovating across various areas of our service within Intune, in Azure, and in other dependent areas. Balancing innovation against rapid growth in a very short time was an interesting and difficult challenge. We had some mitigations in place, but we wanted to be ahead of the curve in terms of scale and performance, and this pace of growth was something of a wake-up call to accelerate our journey to become a more mature and scalable globally distributed cloud service. Over the next few months, we embarked on significant changes in the way we architected, operated, and ran our services.

This blog is a four-part series that describes Intune's journey to become one of the most mature and scalable cloud services running on Azure. Today, we are one of the most mature services operating at high scale, constantly improving across the six pillars of availability, reliability, performance, scale, security, and agility. The series is roughly divided into the following topics:

  1. Architecture and background, and our rapid reactive actions to improve customer satisfaction and make a positive impact on business growth.
  2. Our proactive measures and actions to prepare for immediate future growth.
  3. Actions to mature the service to be highly available and scalable, on par with other high-scale, world-class services.
  4. The path towards a pioneering service that sets an example for various high-scale operations in distributed systems.

Each blog will summarize the learnings in addition to explaining the topic in some detail. The first part of the series is described below; the remaining three topics will be covered in future blogs over the coming months.

Background and Architecture

In 2015, Intune was a combination of services running on physical machines hosted in a private data center and distributed services running on Azure. By 2018, all Intune services had moved to Azure. This and future blogs focus only on the distributed services running on Azure; the migration of the services running on physical machines to Azure is a different journey, and perhaps a blog at some point in the future. The rest of this section describes the background and architecture as of 2015.

Global Architectural View

Intune's cloud services are built on top of Azure Service Fabric (ASF). All services are deployed to an ASF cluster consisting of a group of front-end (FE) and middle-tier (MT) nodes. The FE nodes are hosted on the A4 SKU (14 GB, 8 cores). The MT nodes are hosted on the A7 SKU (56 GB, 8 cores). A cluster is completely isolated and independent of other clusters; clusters cannot access each other in any way, as they are hosted in completely different subscriptions and data centers. There are 18 such ASF clusters throughout the world, spread across 3 regions: North America (NA), Europe (EU), and Asia Pacific (AP). Each ASF cluster has an identical set of services deployed and performs identical functionality. These services consist of stateless and partitioned stateful services. Figure 1 shows the global architectural view.

Figure 1: Intune's Global Clusters (a.k.a. Azure Scale Unit, ASU) – Architecture View (2015)

Cluster Architecture Drilldown

Within each cluster, we have 5,000+ services running, drawn from ~80 unique types of stateless microservices and ~40 unique types of stateful microservices. Stateless services run multiple instances on all FE nodes, routed by the Azure load balancer. Stateful services, and some high-value stateless services, run on MT nodes. The stateful services are built on an in-memory architecture that we built in-house as a NoSQL database (recall that we are talking about 2015 here).

The stateful in-memory store we implemented is a combination of AVL trees and hash sets that allows extremely fast writes, gets, table scans, and secondary index searches. These stateful services are partitioned for scale-out. Each partition has 5 replicas to handle availability. One of the 5 acts as the primary replica, where all requests are handled; the remaining 4 are secondary replicas, replicated from the primary. Some of our services require strong consistency for their data operations, which means we need a quorum of replicas in order to satisfy writes. In these scenarios, we prefer CP over AP in terms of the CAP theorem; i.e., when a quorum of replicas is not available, we fail writes and hence lose availability. Some of our scenarios would be fine with eventual consistency and would benefit from AP over CP, but for simplicity's sake, our initial architecture supported strong consistency across all services. So, at this point, we are CP.
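
To make the quorum rule concrete, here is a minimal, illustrative PowerShell sketch of a majority-quorum check (floor(n/2) + 1); this is not Intune's actual implementation:

# Writes are accepted only when a majority of replicas is available.
function Test-WriteQuorum {
    param([int]$TotalReplicas, [int]$AvailableReplicas)
    $quorum = [math]::Floor($TotalReplicas / 2) + 1
    return $AvailableReplicas -ge $quorum
}

Test-WriteQuorum -TotalReplicas 5 -AvailableReplicas 3   # True: quorum of 3 met, write succeeds
Test-WriteQuorum -TotalReplicas 5 -AvailableReplicas 2   # False: write fails (choosing C over A)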

ASF does a great job in many respects, one of which is densely packing services across the cluster. The typical number of processes running on each MT node ranges from 30-50, hosting multiple stateful replicas. ASF also handles all the complexities of managing and orchestrating replica failover and movement, performing rolling upgrades for all our service deployments, and load balancing across the cluster. If and when the primary replica dies, a secondary replica is automatically promoted to primary by ASF, and a new secondary replica is built to satisfy the 5-replica requirement. The new secondary initiates and completes a full memory-to-memory data transfer from the primary. We also periodically back up data to external persisted Azure table/blob storage with a 10-minute RPO, to recover from cases where all replicas are lost in a disaster or partition loss. Figure 2 shows the cluster view.

Figure 2: Intune's Azure Scale Unit (a.k.a. cluster or ASU) Architecture (2015)

Issues

As mentioned earlier, with the rapid usage growth (going from approximately 3 billion to 7 billion transactions per day) towards the end of 2015 and the beginning of 2016, our back-end services started seeing a corresponding huge increase in traffic. So, we started devising tactical solutions to give immediate relief from any issues arising out of these growing pains.

Issue #1:

The very first thing we realized was that we needed proper telemetry and alerting. Telemetry at the scale we needed was itself undergoing rapid innovation in the underlying Azure infrastructure at this point, and we could not leverage it immediately due to GA timing. So, from a tactical point of view, we had to invent a few instrumentation/diagnostic solutions very quickly so that we could get sufficient data to start mitigations. With the correct telemetry in place, we began gathering data on the most critical issues to address for the biggest relief.

The quick investments in telemetry paid off in a big way. We were able to investigate and determine tactical solutions and iterate quickly. The remaining issues below were all identified and driven by this short-term but high-impact investment in telemetry.

Issue #2:

Telemetry made us aware that some partitions were handling huge amounts of data. A single partition would sometimes store millions of objects, and transaction rates reached up to 6 billion per day across the clusters. More data meant more data transfer whenever secondary replicas needed to be rebuilt, as happened when any of the existing primary/secondary replicas died or had to be load balanced. The more data, the longer it took to build the secondary, with associated memory and CPU costs.

Much of this time was spent in serialization/deserialization of the data transferred between replicas during rebuild. We were using the .NET DataContractSerializer; after perf investigations across many serializers, we settled on switching to Avro. Avro gave us a 50% throughput and CPU improvement and significantly reduced rebuild time. For example, for a 4 GB data transfer, rebuilds that had been taking up to 35 minutes would complete in 20 minutes or less. This was not optimal, but we were looking for immediate relief, and this solution helped in that respect. I will share in my next blog how we reduced this time from 20 minutes to seconds.

Issue #3:

The usage growth also brought new traffic and search patterns that our algorithms were not optimized to handle efficiently (CPU- and memory-wise). We had designed efficient secondary index searching with AVL trees; however, certain search patterns could be further optimized. Our assumption was that the secondary index trees would typically be much smaller than the main tree used for full table scans, and would meet all our needs. However, when we were looking at some high-CPU issues, we noticed a traffic pattern that occasionally pegged the CPU for certain secondary index searches. Further analysis showed us that paging and order-by searches over millions of objects in a secondary index could cause extremely high CPU usage and impact all services running on that node.

This was a great insight that we could immediately react to with an alternative algorithm. For the paging and order-by searches, we designed and implemented a max-heap approach to replace the AVL tree. The time complexity for inserts and searches is an order of magnitude better with the max heap. Inserting 1M objects dropped from 5 seconds to 250 milliseconds, and order-by (essentially sorting) for 1M objects went from 5 seconds to 1.5 seconds. Given the number of search queries performing these types of operations, this improvement resulted in significant savings in memory and CPU consumption across the cluster.
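
As a rough illustration of why the heap helps: a max heap can be built quickly and then serves an ordered page by popping only as many items as the page needs, instead of fully sorting the index. Below is a simplified PowerShell sketch of the idea; Intune's actual store is an in-house C# implementation, so this is illustrative only:

# Push a value onto a binary max heap stored in an ArrayList (sift up).
function Push-MaxHeap {
    param([System.Collections.ArrayList]$Heap, $Value)
    [void]$Heap.Add($Value)
    $i = $Heap.Count - 1
    while ($i -gt 0) {
        $parent = [int][math]::Floor(($i - 1) / 2)
        if ($Heap[$i] -le $Heap[$parent]) { break }
        $tmp = $Heap[$i]; $Heap[$i] = $Heap[$parent]; $Heap[$parent] = $tmp
        $i = $parent
    }
}

# Pop the largest value and restore the heap property (sift down).
function Pop-MaxHeap {
    param([System.Collections.ArrayList]$Heap)
    $top = $Heap[0]
    $Heap[0] = $Heap[$Heap.Count - 1]
    $Heap.RemoveAt($Heap.Count - 1)
    $i = 0
    while ($true) {
        $left = 2 * $i + 1; $right = 2 * $i + 2; $largest = $i
        if ($left -lt $Heap.Count -and $Heap[$left] -gt $Heap[$largest]) { $largest = $left }
        if ($right -lt $Heap.Count -and $Heap[$right] -gt $Heap[$largest]) { $largest = $right }
        if ($largest -eq $i) { break }
        $tmp = $Heap[$i]; $Heap[$i] = $Heap[$largest]; $Heap[$largest] = $tmp
        $i = $largest
    }
    return $top
}

# Serve the first "page" of the 5 largest values without sorting everything.
$heap = [System.Collections.ArrayList]::new()
foreach ($v in 42, 7, 99, 15, 3, 88, 61) { Push-MaxHeap -Heap $heap -Value $v }
$page = foreach ($n in 1..5) { Pop-MaxHeap -Heap $heap }
$page   # 99 88 61 42 15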

Issue #4:

The vast majority of the growth impact was seen when we were performing deployments/upgrades, and these issues were further exacerbated when our FE and MT nodes were rebooted by Azure as part of its OS patching schedule. The nodes are rebooted upgrade domain (UD) by UD in a sequential manner, with a hard limit of 20 minutes for each UD to be completely functional before the next UD upgrade begins.

There were 2 categories of issues that surfaced for us with the upgrades:

  • The replica count for stateful services was equal to the number of UDs (both were 5). So, when one UD was being upgraded, ASF had to move all the replicas from that UD to one of the other 4, while maintaining a variety of constraints, such as proper distribution of replicas across fault domains and primary/secondary replicas not sharing nodes. This required a fair amount of replica movement and density during upgrades. From issue #2 above, we knew some rebuilds could take up to 20 minutes, which meant secondary replicas might not be fully ready before the next UD was upgraded. The net effect was that we lost quorum, because a sufficient number of replicas was not active to satisfy writes during upgrades. Figure 3 shows the effect of the replica density changes during upgrades; the steep increase from ~350 replicas to ~1000 replicas for one of our services is an example of the amount of rebuilding that was happening. Our immediate reaction was to bump up the SKU of the nodes to get some relief, but the underlying platform didn't support an in-place SKU upgrade. We would instead have had to fail over to a new cluster, which meant data migration; that would have been a very complex procedure, so we dropped the idea. I will describe in one of the next blog posts how we overcame this limitation.
  • The Intune and ASF teams performed a deep analysis of the problem and finally determined that the optimal configuration was 4 replicas with 5 UDs, so that one UD is always available to take on additional replicas, avoiding excessive replica movement. This provided a significant boost to our cluster stability, and the 3x spike in replica density during upgrades dropped by 50-75%.

 

Figure 3: Replica Count and Density Impact During Upgrades

 

Finally, we also noticed that our clusters could be better balanced in terms of replica counts and memory consumption. Some nodes were highly utilized, while others were almost idle. Obviously, this put undue pressure on the heavily loaded nodes during traffic spikes or upgrades. Our solution was to implement load-balancing metrics, reporting, and configuration in our clusters. The impact of this is best shown in Figures 4 and 5; the blue lines indicate the balance after our improvements. The X-axis is the node name we use.

 

Figure 4: Per Node Replica Counts Before and After Load Balancing Improvements.

 

Figure 5: Per Node Memory Consumption Before and After Load Balancing Improvements.

 

Learnings
There are 4 top-level learnings from this experience that I believe are applicable to any large-scale cloud service:

  • Make telemetry and alerting one of the most critical parts of your design. Monitor the telemetry and alerting, iterate, and refine in a pre-production environment before exposing a feature to your customers in production.
  • Know your dependencies. If your scale solution doesn't align with your dependent platform, all bets are off. For example, if your solution for scaling out is to increase from 10 nodes to 500 nodes, ensure that the dependent platform (Azure, AWS, or whatever it is) supports that increase, and understand the length of time it takes to do so. If there is a limit on adding only a few nodes at a time, you will need to adjust your reactive mechanisms (alerts, etc.) to account for this delay. The same applies to scaling up: if your scale-up solution is a SKU upgrade, ensure that your dependent platform supports an in-place upgrade from a low-performance SKU to a high-performance SKU.
  • Continually validate your assumptions. Many cloud services and platforms are still evolving, and assumptions you made just a few months ago may no longer be valid in many respects, including the dependent platform's design/architecture, newly available better or more optimized solutions, deprecated features, etc. Part of this is monitoring your core code paths for changes in traffic patterns and ensuring that the designs, algorithms, and implementations you put in place are still valid for their usage. Cloud service traffic patterns can and will change, and a solution that was valid a few months ago may no longer be optimal and needs to be revisited or replaced with a more efficient one.
  • Make it a priority to do capacity planning. Determine how you can do predictive capacity analysis and ensure you review it at a regular cadence. This will make the difference between being reactive and proactive for customer-impacting scale issues.

Conclusion

Implementation and rollout of all the above solutions to production took us about 3-4 months. By April 2016, the results were highly encouraging. There was a vast improvement in our cluster stability, availability, and reliability, and this was also felt by our customers, who gave very positive feedback on the improvements we had made in reliability and stability. This was hugely encouraging, and we had all of the learnings above to take us forward and make things even better. Our journey to a mature and scalable distributed cloud service had begun.

 

Categories: Latest Microsoft News

Windows Server 2008 SP2 servicing changes

Latest Microsoft Server Management News - Tue, 06/12/2018 - 10:00

This blog post was authored by Sachin Goyal, Senior Program Manager, Windows Servicing & Delivery Team.

We are moving to a rollup model for Windows Server 2008 SP2. The initial preview of the monthly quality rollup will be released on Tuesday, August 21, 2018.

Windows Server 2008 SP2 will now follow a similar update servicing model as later Windows versions, bringing a more consistent and simplified servicing experience. For those of you who manage Windows updates within your organization, it's important that you understand the choices that will be available.

Let's review what we will release each month following the initial preview of the monthly quality rollup release in August:

A security-only quality update: Starting September 2018, this security-only update will be released on Update Tuesday, commonly referred to as Patch Tuesday, the second Tuesday of the month.

A security monthly quality rollup: Starting September 2018, this monthly rollup will be released on Update Tuesday, also known as Patch Tuesday, the second Tuesday of the month.

A preview of the monthly quality rollup: Starting August 2018, the preview rollup will be released on the third Tuesday of the month.

For more details, please refer to the Windows for IT Pros blog.

Internet Explorer updates
  • The monthly rollups will contain fixes for Internet Explorer 9 on Windows Server 2008 SP2. The security-only update, monthly rollup, and preview rollup will not install Internet Explorer, nor upgrade it to a newer version, if it is not already present.
.NET Framework monthly rollup

These changes will simplify the updating of Windows Server 2008 SP2 computers, while also improving scanning and installation times, and providing flexibility depending on how you typically manage Windows updates today.

Categories: Latest Microsoft News
