Feed aggregator

PowerShell Core 6 Release Candidate

Latest Microsoft Server Management News - Fri, 11/17/2017 - 17:03

PowerShell Core 6 Release Candidate

Last year, we announced that PowerShell was not only Open Source, but also cross platform.  The finish line is in sight and we recently published the Release Candidate for PowerShell Core 6!

PowerShell Team ♥ Community

It has been an amazing experience for the team working with the community on PowerShell Core 6!  Being able to work openly and transparently meant that we could address customer concerns much more quickly than in the days of Windows PowerShell.  The contributions from the community have also enabled PowerShell Core 6 to be more agile and richer in capability, particularly for cmdlets.  In fact, the community has contributed just over half of the pull requests (PRs)!

This really shows the benefit of Open Source and an active community and we certainly wouldn’t have gotten as far as we have without such a passionate and helpful community!

Roadmap to General Availability and the Future

When the PowerShell Team started working through everything required to publish a release, we also created a branch for the eventual PowerShell Core 6.0.0 final release.  There are still some issues and work items that need to be completed for the GA (General Availability) release.  General Availability simply means a supported release (it replaces the legacy term Release to Manufacturing!).

This means that any changes merged to the master branch will show up in the 6.1.0 release.  I encourage the community to continue to make contributions to PowerShell Core with the expectation that it will be part of 6.1.0 and not 6.0.0.  Only issues (and associated pull requests) approved for 6.0.0 GA with milestone set to `6.0.0-GA` will be taken for the 6.0.0 release.

If you find any deployment or adoption blockers, please open issues on GitHub (or upvote existing ones with a thumbs up) and mention me (using `@SteveL-MSFT`) so I'm notified; those issues will be triaged, and a decision will be made on whether to take a fix before 6.0.0 GA.

We are currently targeting having the GA release on January 10th, 2018.

The first PowerShell Core 6.1.0 beta release will be published after PowerShell Core 6.0.0 GA and we plan to continue a 3 week cadence for beta releases.  Note that if you use the install-powershell.ps1 script to install daily builds, it will be from the master branch (aka 6.1.0) and not from our 6.0.0 Release Candidate or GA.
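If you do install daily builds that way, a typical invocation looks something like the sketch below. Treat the parameter names as assumptions to verify against the script's own help, since the script evolves with the master branch.

# Download the installer script and review its options before running it.
# -Daily and -Destination are the switches documented for daily builds at the
# time of writing; confirm them with Get-Help for your copy of the script.
Invoke-WebRequest -Uri 'https://aka.ms/install-powershell.ps1' -OutFile .\install-powershell.ps1
Get-Help .\install-powershell.ps1 -Detailed
.\install-powershell.ps1 -Daily -Destination "$env:LOCALAPPDATA\powershell-daily"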

PowerShell Core 6 Support

PowerShell Core 6.0.0 will adopt the Microsoft Modern Lifecycle for support.  Essentially, this means that barring any critical security fixes, customers are expected to install the latest released version of PowerShell Core.  In general, if you find an issue, please open it on GitHub.  We’ll be providing more information on the specifics of this lifecycle and what it means for PowerShell Core soon.
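When you do open an issue, it helps to say exactly which build you are running; a quick way to check (the sample values in the comments are illustrative):

# Confirm which PowerShell Core build you are on before filing or upvoting an issue.
$PSVersionTable.PSVersion      # e.g. 6.0.0-rc on the Release Candidate
$PSVersionTable.GitCommitId    # the exact build, handy to paste into a GitHub issue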

Thanks to everyone for their support and contributions!

Steve Lee
Principal Engineer Manager
PowerShell Team

Categories: Latest Microsoft News

Because it’s Friday: Better living through chemistry

Latest Microsoft Data Platform News - Fri, 11/17/2017 - 14:20

This video is a compilation of some spectacular chemical reactions, with a few physics demonstrations thrown in for good measure. (But hey, chemistry is just applied physics, right?).

That's all from us here at the blog for this week. Have a great weekend, and we'll be back on Monday!

Categories: Latest Microsoft News

Highlights from the Connect(); conference

Latest Microsoft Data Platform News - Fri, 11/17/2017 - 13:44

Connect();, the annual Microsoft developer conference, is wrapping up now in New York. The conference was the venue for a number of major announcements and talks. Here are some highlights related to data science, machine learning, and artificial intelligence:

Lastly, I wanted to share this video presented at the conference from Stack Overflow. Keep an eye out for R community luminary David Robinson programming in R!

You can find more from the Connect conference, including on-demand replays of the talks and keynotes, at the link below.

Microsoft: Connect(); November 15-17, 2017

Categories: Latest Microsoft News

Released: Microsoft Kerberos Configuration Manager for SQL Server 4.0

Latest Microsoft Data Platform News - Fri, 11/17/2017 - 12:10

We are pleased to announce the latest generally available (GA) version of Microsoft Kerberos Configuration Manager for SQL Server.

Get it here: Download Microsoft Kerberos Configuration Manager for SQL Server

Why Kerberos?

Kerberos authentication provides a highly secure method to authenticate client and server entities (security principals) on a network. To use Kerberos authentication with SQL Server, a Service Principal Name (SPN) must be registered with Active Directory, which plays the role of the Key Distribution Center in a Windows domain. In addition, many customers also enable delegation for multi-tier applications using SQL Server. In such a setup, it may be difficult to troubleshoot the connectivity problems with SQL Server when Kerberos authentication fails.
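For reference, registering an SPN manually for a default instance is typically done with setspn.exe, as in the sketch below; the domain, account, and host names are placeholders, and the tool described in this post can detect and fix SPN problems for you.

# List the SPNs currently registered to the SQL Server service account
# (CONTOSO\sqlSvc is a placeholder for your actual service account).
setspn -L CONTOSO\sqlSvc

# Register the SPN for a default instance listening on TCP 1433;
# -S checks for duplicate SPNs before adding the new one.
setspn -S MSSQLSvc/sqlhost.contoso.com:1433 CONTOSO\sqlSvc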

Here are some additional reading materials for your reference.

Why use this tool?

The Kerberos Configuration Manager for SQL Server is a diagnostic tool that helps troubleshoot Kerberos related connectivity issues with SQL Server, SQL Server Reporting Services, and SQL Server Analysis Services. It can perform the following functions:

  • Gather information on OS and Microsoft SQL Server instances installed on a server.
  • Report on all SPN and delegation configurations and Always On Availability Group Listeners installed on a server.
  • Identify potential problems in SPNs and delegations.
  • Fix potential SPN problems.

This release (v4.0) adds support for Always On Availability Group Listeners.

Notes
  • Microsoft Kerberos Configuration Manager for SQL Server requires a user with permission to connect to the WMI service on any machine it's connecting to (a quick connectivity check is sketched after these notes). For more information, refer to Securing a Remote WMI Connection.
  • For Always On Availability Group Listeners discovery, run this tool from the owner node.
  • Also, if needed for troubleshooting, the Kerberos Configuration Manager for SQL Server creates a log file in %AppData%\Microsoft\KerberosConfigMgr.
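As a quick sanity check before running the tool against a remote machine, you can verify that your account can reach its WMI service; SQLNODE01 below is a placeholder for the target server.

# Confirm remote WMI connectivity with the account that will run the tool.
Get-WmiObject -Class Win32_OperatingSystem -ComputerName SQLNODE01 |
    Select-Object CSName, Caption, Version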
Categories: Latest Microsoft News

Update 1711 for Configuration Manager Technical Preview Branch – Available Now!

Latest Microsoft Server Management News - Fri, 11/17/2017 - 07:00

Hello everyone! We are happy to let you know that update 1711 for the Technical Preview Branch of System Center Configuration Manager has been released. Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month's new preview features include:

  • Run Task Sequence step – This release includes improvements to the new Run Task Sequence step, which runs another task sequence, creating a parent-child relationship between task sequences. See the online documentation for more details about the improvements. This is currently the feature with the third highest number of votes on UserVoice.
  • Allow user interaction when installing applications as system – Now users can interact with an application installation user interface in system context even during a task sequence. This feature is a popular request on UserVoice.

This release also includes the following improvement for customers using System Center Configuration Manager connected with Microsoft Intune to manage mobile devices:

  • New options for compliance policies – You can now configure new options for compliance policies for Windows 10 devices. The new settings include policies for Firewall, User Account Control, Windows Defender Antivirus, and OS build versioning.

Update 1711 for Technical Preview Branch is available in the Configuration Manager console. For new installations, please use the 1703 baseline version of Configuration Manager Technical Preview Branch available on the TechNet Evaluation Center.

We would love to hear your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there's a new feature or enhancement you want us to consider for future updates, please use the Configuration Manager UserVoice site.

Thanks,

The System Center Configuration Manager team

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews

Try the System Center Configuration Manager Technical Preview Branch

Documentation for System Center Configuration Manager

System Center Configuration Manager Forums

System Center Configuration Manager Support

Download the Configuration Manager Support Center

 

Categories: Latest Microsoft News

The City of Chicago uses R to issue beach safety alerts

Latest Microsoft Data Platform News - Thu, 11/16/2017 - 14:27

Among the many interesting talks I saw at the Domino Data Science Pop-Up in Chicago earlier this week was the presentation by Gene Lynes and Nick Lucius from the City of Chicago. The City of Chicago Tech Plan encourages smart communities and open government, and as part of that initiative the city has undertaken dozens of open-source, open-data projects in areas such as food safety inspections, preventing the spread of West Nile virus, and keeping sidewalks clear of snow.

This talk was on the Clear Water initiative, a project to monitor the water quality of Chicago's many public beaches on Lake Michigan, and to issue safety alerts (or in serious cases, beach closures) when E. coli levels in the water get too high. The problem is that E. coli levels can change rapidly: levels can be normal for weeks, and then spike for a single day. But traditional culture tests take many days to return results, and while rapid DNA-based tests do exist, it's not possible to conduct these tests daily at every beach.

The solution is to build a predictive model, which uses meteorological data and rapid DNA tests for some beaches, combined with historical (culture-based) evaluations of water quality, to predict E. coli levels at all beaches every day. The analysis is performed using R (you can find the R code in this GitHub repository).

The analysis was developed in conjunction with citizen scientists at Chi Hack Night and statisticians from DePaul University. In 2017, the model was piloted in production in Chicago to issue beach safety alerts and to create a live map of beach water quality. This new R-based model predicted 60 additional occurrences of poor water quality, compared with the process used in prior years.

Still, water quality is hard to predict: once the slower culture results come back and there is an actual outcome to compare against, the model's accuracy works out to 38%, with fewer than 2% false alarms. (The city plans to use clustering techniques to further improve that number.) The model uses rapid DNA testing at five beaches to predict all beaches along Lake Michigan. A Shiny app (linked below) lets you explore the impact of testing at a different set of beaches, and adjusting the false positive rate, on the overall accuracy of the model.

You can find more details about the City of Chicago Clear Water initiative at the link below.

City of Chicago: Clear Water

Categories: Latest Microsoft News

SQL Server 2017: A proven leader in database performance

Latest Microsoft Data Platform News - Thu, 11/16/2017 - 13:00

This post was authored by Bob Ward, Principal Architect, and Jamie Reding, Senior Program Manager and Performance Architect, Microsoft Database Systems Group.

SQL Server continues to be a proven leader in database performance for both analytic and OLTP workloads. SQL Server 2017 is fast, with built-in capabilities and features such as Columnstore indexes to accelerate analytic performance, and Automatic Tuning and Adaptive Query Processing to keep your database application at peak speed.

Recently, Hewlett Packard Enterprise (HPE) announced a new world record TPC-H 10TB benchmark result¹ using SQL Server 2017 and Windows Server 2016 on their new DL580 Gen10 Server. This amazing new result of 1,479,748 Composite Query-per-Hour (QphH) was achieved at a price/performance of 0.95 USD per QphH, continuing to show SQL Server’s leadership in price and performance.

HPE also announced the first TPC-H 3TB result² on a 2-socket system using SQL Server 2017 and Windows Server 2016 with their DL380 Gen Server. They achieved a stellar 1,014,374 QphH on only two sockets. These results continue to show how powerful SQL Server can be at handling your analytic query workloads, including data warehouses.

SQL Server is also a proven leader for OLTP workloads. Lenovo recently announced a new world-record TPC-E benchmark result³ using SQL Server 2017 and Windows Server 2016. This is now the #1 TPC-E result in both performance, at 11,357 tpsE, and price/performance, at 98.83 USD per tpsE, for systems with 4 sockets or more. This result was achieved on Lenovo’s ThinkSystem SR950 server using 4 sockets and 112 cores, which represents a 25% performance gain over the previous 4-socket result with 16% more cores.

SQL Server 2017 is the world leader in TPC-E and TPC-H performance, price, and value and continues to demonstrate it is one of the fastest databases on the planet, in your cloud or ours.

References
  • ¹ 10TB TPC-H non-clustered result as of November 9th, 2017.
  • ² 3TB TPC-H non-clustered result as of November 9th, 2017.
  • ³ TPC-E benchmark result as of November 9th, 2017.
Categories: Latest Microsoft News

On-Demand Webinar – AI Development Using Data Science VMs (DSVM), Deep Learning VMs (DLVM) & Azure Batch AI

Latest Microsoft Data Platform News - Thu, 11/16/2017 - 09:00

This post is authored by Barnam Bora, Program Manager in the Cloud AI group at Microsoft.

Microsoft’s Data Science Virtual Machines (DSVM) and Deep Learning Virtual Machines (DLVM) are a family of popular VM images in Windows Server and Linux flavors that are published on the Azure Marketplace. They have a curated but broad set of pre-configured machine learning and data science tools including pre-loaded samples. DSVM and DLVM are configured and tested to work seamlessly with a plethora of services available on the Microsoft Azure cloud, and they enable a wide array of data analytics scenarios that are being used by many organizations across the globe.

We recently hosted a webinar covering the workflow of building ML and AI-powered solutions in Azure using DSVM, DLVM and related services such as Azure Batch AI and Azure Machine Learning Model Management. The webinar video is available from the link below (requires registration with Microsoft), and more information about the webinar is in the sections that follow.


WATCH: AI development using DSVM/DLVM

Scenarios Covered in the Webinar
Single GPU Node AI Model Training

DSVM and DLVM are great tools to develop, test and deploy AI models and solutions. Data scientists and developers can use the capabilities provided in DSVM/DLVM to start developing AI solutions on a single node/machine. Once initial development is complete and there’s a need to train on a much larger dataset, it’s remarkably simple to scale out from a single node to a multi-node scalable cluster for parallelized training of models using the Azure Batch AI service.

Scale Out AI Model Training with DSVM and DLVM on Azure Batch AI

This section is a detailed discussion and demonstration of using the DSVM/DLVM for single node development and testing of AI Models and then scaling out to a multi-node cluster using the Azure Batch AI Service. The dataset used for this sample is the CIFAR-10 Dataset.


AI Model Deployment & Management using Azure Machine Learning Model Management

In this section we discuss the workflow for taking a trained model and packaging and deploying it using Azure Machine Learning Model Management to build operational scoring pipelines.

 

Other Sections Covered in the Webinar
Introduction to the Data Science & Deep Learning Virtual Machines (DSVM/DLVM) offerings in Azure

This section introduces the DSVM and DLVM offerings and familiarizes participants with the functionality that they include.


The Typical AI Development Workflow
This section is a discussion about the typical end-to-end AI solution development workflow and how DSVM/DLVM enable data scientists and developers to build AI solutions.


Why Use DSVM and DLVM in the AI Development Workflow?

This section highlights the key benefits provided by DSVM and DLVM and the usage patterns they support at thousands of companies worldwide.


Barnam
@barnambora | LinkedIn: barnam

Resources

Categories: Latest Microsoft News

New GitHub location for AdventureWorks

Latest Microsoft Data Platform News - Thu, 11/16/2017 - 08:52

AdventureWorks has long been one of the most used database samples to run demos. Its downloads and scripts are now available in the SQL Server Samples repository in GitHub. If you have the download links in any of your scripts or automations, please update links to use the new download location.

With this, the Codeplex repository will be archived to read-only at the end of November. We left the AdventureWorks files intact on Codeplex so we wouldn’t break existing links. But, we ask everyone to start using the new GitHub location going forward.

The downloads and scripts on GitHub have these improvements:

  • The AdventureWorks and AdventureWorksDW install scripts work on any version of SQL Server. Each script auto-generates the right database compatibility to match the current SQL Server instance. This means you can quickly install either database on any release of SQL Server including CTPs, SPs, and any interim release.
  • The AdventureWorks download page lists all the .bak files for each SQL Server version, including SQL Server 2016 and SQL Server 2017.
  • The SQL Server 2016 CTP3 sample databases are published on the site as .bak files and renamed to AdventureWorks2016_EXT and AdventureWorksDW2016_EXT.

Please try out the new downloads and scripts. If you have any issues or comments, please send feedback via this blog, or in GitHub.
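If you want to try one of the .bak downloads quickly, a minimal restore to a local instance looks like the sketch below; the instance, database name, and path are illustrative, and it assumes the SqlServer PowerShell module is installed.

# Restore a downloaded AdventureWorks backup to a local default instance.
# Add -RelocateFile arguments if your data and log directories differ from
# the paths recorded in the backup.
Import-Module SqlServer
Restore-SqlDatabase -ServerInstance 'localhost' -Database 'AdventureWorks2017' -BackupFile 'C:\Samples\AdventureWorks2017.bak'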

Categories: Latest Microsoft News

New Oracle Cloud Infrastructure Innovations Deliver Unmatched Performance and Value for the Most Demanding Enterprise, AI and HPC Applications

Latest Oracle Database News - Thu, 11/16/2017 - 05:00
Press Release

New Oracle Cloud Infrastructure Innovations Deliver Unmatched Performance and Value for the Most Demanding Enterprise, AI and HPC Applications

Tops AWS with 1,214% better storage performance at 88% lower cost per IO

Redwood Shores, Calif.—Nov 16, 2017

Oracle today announced the general availability of a range of new Oracle Cloud Infrastructure compute options, providing customers with unparalleled compute performance based on Oracle’s recently announced X7 hardware. Newly enhanced virtual machine (VM) and bare metal compute, and new bare metal graphical processing unit (GPU) instances enable customers to run even the most infrastructure-heavy workloads such as high-performance computing (HPC), big data, and artificial intelligence (AI) faster and more cost-effectively.

Unlike competitive offerings, Oracle Cloud Infrastructure is built to meet the unique requirements of enterprises, offering predictable performance for enterprise applications while bringing cost efficiency to HPC use cases. Oracle delivers 1,214 percent better storage performance at 88 percent lower cost per input/output operation (IO)¹.

New Innovations Drive Unrivaled Performance at Scale

All of Oracle Cloud Infrastructure’s new compute instances leverage Intel’s latest Xeon processors based on the Skylake architecture. Oracle’s accelerated bare metal shapes are also powered by NVIDIA Tesla P100 GPUs, based on the Pascal architecture. Providing 28 cores, dual 25Gb network interfaces for high-bandwidth requirements and over 18 TFLOPS of single-precision performance per instance, these GPU instances accelerate computation-heavy use cases such as reservoir modeling, AI, and Deep Learning.

Oracle also plans to soon release NVIDIA Volta architecture-powered instances with 8 NVIDIA Tesla V100 GPUs interconnected via NVIDIA NVLINK to generate over 125 TFLOPS of single-precision performance. Unlike the competition, Oracle will offer these GPUs as both virtual machines and bare metal instances.  Oracle will also provide pre-configured images for fast deployment of use cases such as AI. Customers can also leverage TensorFlow or Caffe toolkits to accelerate HPC and Deep Learning use cases.

“Only Oracle Cloud Infrastructure provides the compute, storage, networking, and edge services necessary to deliver the end-to-end performance required of today’s modern enterprise,” said Kash Iftikhar, vice president of product management, Oracle. “With these latest enhancements, customers can avoid additional hardware investments on-premises and gain the agility of the cloud. Oracle Cloud Infrastructure offers them tremendous horsepower on-demand to drive competitive advantage.”

In addition, Oracle’s new VM standard shape is now available in 1, 2, 4, 8, 16, and 24 cores, while the bare metal standard shape offers 52 cores, the highest Intel Skylake-based CPU count per instance of any cloud vendor. Combined with its high-scale storage capacity, supporting up to 512 terabytes (TB) of non-volatile memory express (NVMe) solid state drive (SSD) remote block volumes, these instances are ideal for traditional enterprise applications that require predictable storage performance.

The Dense I/O shapes are also available in both VM and bare metal instances and are optimal for HPC, database applications, and big data workloads. The bare metal Dense I/O shape is capable of over 3.9 million input/output operations per second (IOPS) for write operations. It also includes 51 TB of local NVMe SSD storage, offering 237 percent more capacity than competing solutions¹.

Furthermore, Oracle Cloud Infrastructure has simplified management of virtual machines by offering a Terraform provider for single-click deployment of single or multiple compute instances for clustering. In addition, a Terraform-based Kubernetes installer is available for deployment of highly available, containerized applications.

By delivering compute solutions that leverage NVIDIA’s latest technologies, Oracle can dramatically accelerate its customers’ HPC, analytics and AI workloads. “HPC, AI and advanced analytic workloads are defined by an almost insatiable hunger for compute,” said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. “To run these compute-intensive workloads, customers require enterprise-class accelerated computing, a need Oracle is addressing by putting NVIDIA Tesla V100 GPU accelerators in the Oracle Cloud Infrastructure.”

“The integration of TidalScale's inverse hypervisor technology with Oracle Cloud Infrastructure enables organizations, for the first time, to run their largest workloads across dozens of Oracle Cloud bare metal systems as a single Software-Defined Server in a public cloud environment,” said Gary Smerdon, chief executive officer, TidalScale, Inc. “Oracle Cloud customers now have the flexibility to configure, deploy and right-size servers to fit their compute needs while paying only for what they use.”

“Cutting-edge hardware can make all the difference for companies we work with like Airbus, ARUP and Rolls Royce,” said Jamil Appa, co-founder and director of Zenotech. “We’ve seen significant improvements in performance with the X7 architecture. Oracle Cloud Infrastructure is a no-brainer for compute-intensive HPC workloads.”

¹ Based on comparison to AWS i3.16XL using industry-standard CloudHarmony benchmark, a measure of storage performance across a range of workloads. For more information, see: https://blogs.oracle.com/cloud-infrastructure/high-performance-x7-compute-service-review-analysis

Contact Info

Greg Lunsford
Oracle
+1.650.506.6523
greg.lunsford@oracle.com

Kristin Reeves
Blanc & Otus
+1.415.856.5146
kreeves@blancandotus.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Categories: Latest Oracle News

DSC Resource Kit Release November 2017

Latest Microsoft Server Management News - Wed, 11/15/2017 - 18:14

We just released the DSC Resource Kit!

This release includes updates to 10 DSC resource modules. In these past 6 weeks, 53 pull requests have been merged and 50 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • SecurityPolicyDsc
  • xAdcsDeployment
  • xComputerManagement
  • xDnsServer
  • xExchange
  • xNetworking
  • xPSDesiredStateConfiguration
  • xSQLServer
  • xStorage
  • xWebAdministration

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our last community call for the DSC Resource Kit was last week on November 8. A recording of our updates as well as summarizing notes will be available soon. Join us for the next call at 12PM (Pacific time) on December 20 to ask questions and give feedback about your experience with the DSC Resource Kit.
Also, due to the Christmas holiday, the next DSC Resource Kit release will go out on December 20 instead of the normal 6-week cadence, which would be December 27.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!
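If you have not consumed these modules in a configuration before, the sketch below shows the general pattern using the updated xWebAdministration module; the site name and path are placeholders, and IIS must already be present on the target node.

# Minimal configuration using the xWebsite resource from xWebAdministration.
Configuration SampleWebsite
{
    Import-DscResource -ModuleName xWebAdministration

    Node 'localhost'
    {
        xWebsite ContosoSite
        {
            Ensure       = 'Present'
            Name         = 'Contoso'
            PhysicalPath = 'C:\inetpub\contoso'
            State        = 'Started'
        }
    }
}

# Compile a MOF you can apply with Start-DscConfiguration.
SampleWebsite -OutputPath 'C:\DSC\SampleWebsite'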

All resources with the ‘x’ prefix in their names are still experimental – this means that those resources are provided AS IS and are not supported through any Microsoft support program or service. If you find a problem with a resource, please file an issue on GitHub.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or Changelog.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name / Version / Release Notes

SecurityPolicyDsc 2.1.0.0
  • Updated SecurityOption to handle multi-line logon messages
  • SecurityOption: Added logic and example to handle scenario when using Interactive_logon_Message_text_for_users_attempting_to_log_on
xAdcsDeployment 1.3.0.0
  • Updated to meet HQRM guidelines – fixes issue 33.
  • Fixed markdown rule violations in README.MD.
  • Changed examples to meet HQRM standards and opted in to Example validation tests.
  • Replaced examples in README.MD with links to Example files.
  • Added the VS Code PowerShell extension formatting settings that cause PowerShell files to be formatted as per the DSC Resource kit style guidelines.
  • Opted into Common Tests “Validate Module Files” and “Validate Script Files”.
  • Corrected description in manifest.
  • Added .github support files:
    • CONTRIBUTING.md
    • ISSUE_TEMPLATE.md
    • PULL_REQUEST_TEMPLATE.md
  • Resolved all PSScriptAnalyzer warnings and style guide warnings.
  • Converted all tests to meet Pester V4 guidelines – fixes issue 32.
  • Fixed spelling mistakes in README.MD.
  • Fix to ensure exception thrown if failed to install or uninstall service – fixes issue 3.
  • Converted AppVeyor.yml to use shared AppVeyor module in DSCResource.Tests – fixes issue 29.
xComputerManagement 3.1.0.0
  • xOfflineDomainJoin:
    • Updated to meet HQRM guidelines.
xDnsServer 1.9.0.0
  • Added resource xDnsServerSetting
  • MSFT_xDnsRecord: Added DnsServer property
xExchange 1.17.0.0
  • Fix issue where test for Unlimited quota fails if quota is not already set at Unlimited
xNetworking 5.3.0.0
  • MSFT_xProxySettings:
    • Created new resource configuring proxy settings.
  • MSFT_xDefaultGatewayAddress:
    • Correct 2-SetDefaultGateway.md address family and improve example description – fixes Issue 275.
  • MSFT_xIPAddress:
    • Corrected style and formatting to meet HQRM guidelines.
    • Converted exceptions to use ResourceHelper functions.
    • Changed unit tests so that they can be run in any order.
  • MSFT_xNetAdapterBinding:
    • Corrected style and formatting to meet HQRM guidelines.
    • Converted exceptions to use ResourceHelper functions.
xPSDesiredStateConfiguration 8.0.0.0
  • xDSCWebService
    • BREAKING CHANGE: The Pull Server will now run in a 64 bit IIS process by default. Enable32BitAppOnWin64 needs to be set to TRUE for the Pull Server to run in a 32 bit process.
xSQLServer 9.0.0.0
  • Changes to xSQLServer
    • Updated Pester syntax to v4
    • Fixes broken links to issues in the CHANGELOG.md.
  • Changes to xSQLServerDatabase
    • Added parameter to specify collation for a database to be different from server collation (issue 767).
    • Fixed unit tests for Get-TargetResource to ensure correctly testing return values (issue 849)
  • Changes to xSQLServerAlwaysOnAvailabilityGroup
    • Refactored the unit tests to allow them to be more user friendly and to test additional SQLServer variations.
      • Each test will utilize the Import-SQLModuleStub to ensure the correct module is loaded (issue 784).
    • Fixed an issue when setting the SQLServer parameter to a Fully Qualified Domain Name (FQDN) (issue 468).
    • Fixed the logic so that if a parameter is not supplied to the resource, the resource will not attempt to apply the defaults on subsequent checks (issue 517).
    • Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified, the resource will only determine if a change is needed if the target node is the active host of the SQL Server instance (issue 868).
  • Changes to xSQLServerAlwaysOnAvailabilityGroupDatabaseMembership
    • Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified, the resource will only determine if a change is needed if the target node is the active host of the SQL Server instance (issue 869).
  • Changes to xSQLServerAlwaysOnAvailabilityGroupReplica
    • Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified, the resource will only determine if a change is needed if the target node is the active host of the SQL Server instance (issue 870).
  • Added the CommonTestHelper.psm1 to store common testing functions.
    • Added the Import-SQLModuleStub function to ensure the correct version of the module stubs are loaded (issue 784).
  • Changes to xSQLServerMemory
    • Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified, the resource will only determine if a change is needed if the target node is the active host of the SQL Server instance (issue 867).
  • Changes to xSQLServerNetwork
    • BREAKING CHANGE: Renamed parameter TcpDynamicPorts to TcpDynamicPort and changed type to Boolean (issue 534).
    • Resolved an issue when switching from dynamic to static port configuration (issue 534).
    • Added localization (en-US) for all strings in resource and unit tests (issue 618).
    • Updated examples to reflect new parameters.
  • Changes to xSQLServerRSConfig
    • Added examples
  • Added resource
    • xSQLServerDatabaseDefaultLocation (issue 656)
  • Changes to xSQLServerEndpointPermission
    • Fixed a problem when running the tests locally in a PowerShell console it would ask for parameters (issue 897).
  • Changes to xSQLServerAvailabilityGroupListener
    • Fixed a problem when running the tests locally in a PowerShell console it would ask for parameters (issue 897).
  • Changes to xSQLServerMaxDop
    • Made the resource cluster aware. When ProcessOnlyOnActiveNode is specified, the resource will only determine if a change is needed if the target node is the active host of the SQL Server instance (issue 882).
xStorage 3.3.0.0
  • Opted into common tests for Module and Script files – See Issue 115.
  • xDisk:
    • Added support for Guid Disk Id type – See Issue 104.
    • Added parameter AllowDestructive – See Issue 11.
    • Added parameter ClearDisk – See Issue 50.
  • xDiskAccessPath:
    • Added support for Guid Disk Id type – See Issue 104.
  • xWaitForDisk:
    • Added support for Guid Disk Id type – See Issue 104.
  • Added .markdownlint.json file to configure markdown rules to validate.
  • Clean up Badge area in README.MD – See Issue 110.
  • Disabled MD013 rule checking to enable badge table.
  • Added .github support files:
    • CONTRIBUTING.md
    • ISSUE_TEMPLATE.md
    • PULL_REQUEST_TEMPLATE.md
  • Changed license year to 2017 and set company name to Microsoft Corporation in LICENSE.MD and module manifest – See Issue 111.
  • Set Visual Studio Code setting “powershell.codeFormatting.preset” to “custom” – See Issue 108
  • Added Documentation and Examples section to Readme.md file – see issue 116.
  • Prevent unit tests from DSCResource.Tests from running during test execution – fixes Issue 118.
xWebAdministration 1.19.0.0
  • xWebAppPoolDefaults now returns values. Fixes 311.
  • Added unit tests for xWebAppPoolDefaults. Fixes 183.
How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available in WMF 5.0) to find modules with DSC Resources:

# To list all modules that are part of the DSC Resource Kit
Find-Module -Tag DSCResourceKit

# To list all DSC resources from all sources
Find-DscResource

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the xCertificate module, go to:
https://github.com/PowerShell/xCertificate.

All DSC modules are also listed as submodules of the DscResources repository in the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Keim
Software Engineer
PowerShell DSC Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

Categories: Latest Microsoft News

Good practices for sharing data in spreadsheets

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 15:07

Spreadsheets are powerful tools with many applications: collecting data, sharing data, visualizing data, analyzing data, reporting on data. Sometimes, the temptation to do all of these things in a single workbook is irresistible. But if your goal is to provide data to others for analysis, then features that are useful for, say, reporting are downright detrimental to the task of data analysis. To make things easier on your downstream analysts, and to reduce the risk of inadvertent errors that can be caused by spreadsheets, Karl Broman and Kara Woo have published a paper, Data organization in spreadsheets, chock-full of useful advice. To reiterate from their introduction:

Spreadsheets are often used as a multipurpose tool for data entry, storage, analysis, and visualization. Most spreadsheet programs allow users to perform all of these tasks, however we believe that spreadsheets are best suited to data entry and storage, and that analysis and visualization should happen separately. Analyzing and visualizing data in a separate program, or at least in a separate copy of the data file, reduces the risk of contaminating or destroying the raw data in the spreadsheet.

The paper provides a wealth of helpful tips for making data in spreadsheets ready for analysis:

Basic Data Practices: use consistent data codes and variable names, for example.

Naming Practices: don't use spaces in column names or file names, and make those names meaningful.

Dealing with Dates: avoid a common gotcha by using the ISO 8601 standard for dates, as shown in the XKCD comic to the right.

Representing missing data: don't use empty cells for missing data; use a hyphen or a unique code like NA (but not a number like -9999).

Don't overload the cells. Don't try to pack more than one piece of information into a cell (and don't merge cells, either). In particular, don't encode useful information with color or font; add another column instead.

Follow tidy data principles. Make the data rectangular. If you need multiple rectangles, use multiple files or worksheets. If your data still doesn't fit into that format, a spreadsheet probably wasn't the best place for it in the first place.

Don't make calculations in data files. You're preparing this data for analysis, so avoid the temptation to include formulas to create new data. Just provide the raw data.
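To make the date and missing-data advice concrete, here is a small sketch (the column names and values are made up) that writes a tidy, rectangular CSV with ISO 8601 dates and an explicit NA for a missing value:

# ISO 8601 avoids the day/month ambiguity the paper warns about.
Get-Date -Format 'yyyy-MM-dd'    # e.g. 2017-11-15

# One header row, one record per row, no merged cells, NA for missing data.
$rows = @(
    [pscustomobject]@{ sample_id = 'S001'; collected_on = '2017-11-15'; value = 42 }
    [pscustomobject]@{ sample_id = 'S002'; collected_on = '2017-11-16'; value = 'NA' }
)
$rows | Export-Csv -Path '.\samples.csv' -NoTypeInformation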

You can find details, examples, and lots more useful advice in the complete paper, which I encourage everyone to read at the link below.

The American Statistician: Data organization in spreadsheets (2017), Karl W. Broman and Kara H. Woo (via Sharon Machlis)

Categories: Latest Microsoft News

How to upgrade ConfigMgr to the latest version along with upgrading OS and SQL

Latest Microsoft Server Management News - Wed, 11/15/2017 - 14:33

Here is a step-by-step upgrade path from System Center Configuration Manager 2012 SP2 hosted on Windows Server 2012 R2 to System Center Configuration Manager 1702 or later hosted on Windows Server 2016.

These steps can be used if you want to upgrade a Configuration Manager 2012 R2, R2 SP1, or SP2 environment to Configuration Manager 1702, or to upgrade your environment to the latest operating system and SQL Server.

I strongly recommend you read Upgrade to System Center Configuration Manager before following the upgrade path.

Important points to remember:

  • You can directly upgrade Configuration Manager 2012 R2, R2 SP1, or SP2 to 1702; no intermediate update installation is required.
  • You will need to run TestDbUpgrade before upgrading the Configuration Manager 2012 SP2 environment to Configuration Manager 1702 (if we are using standalone media for 1702).
  • You will need to download Configuration Manager 1702 setup files from your volume licensing website. When you have version 1702 baseline media, you can upgrade the following to a fully licensed version of System Center Configuration Manager version 1702:
    • An evaluation install of System Center Configuration Manager version 1702
    • System Center 2012 Configuration Manager with Service Pack 1
    • System Center 2012 Configuration Manager with Service Pack 2
    • System Center 2012 R2 Configuration Manager
    • System Center 2012 R2 Configuration Manager with Service Pack 1

Let's get to it!

In this example we are running a primary site (Configuration Manager 2012 SP2) on a Windows Server 2012 R2 OS, with SQL Server 2014 hosting the Configuration Manager database on the same box. In the steps below we are:

  • Upgrading the base OS from Windows Server 2012 R2 to Windows Server 2016
  • Upgrading SQL Server 2014 to SQL Server 2016
  • Upgrading Configuration Manager 2012 SP2 to Configuration Manager 1702
Base OS Upgrade

If you plan to upgrade the base OS of the Configuration Manager primary site server and upgrade SQL Server, follow these steps:

Take a backup of your existing primary site database, SMS backup, Source directory, SCCMContentLib (All the folders for SCCMContentlib). See  https://technet.microsoft.com/en-in/library/gg712697.aspx#BKMK_SupplementalBackup

  1. Rename your current Configuration Manager primary site server and create a new machine with Windows Server 2016 installed.
  2. The new machine should have the same name, drive letters, and drive structure as your earlier site server.
  3. Install all the pre-requisites for Configuration Manager. See Site and site system prerequisites for System Center Configuration Manager.

 

SQL upgrade

Please refer to this article for SQL upgrade:

https://docs.microsoft.com/en-us/sccm/core/plan-design/configs/support-for-sql-server-versions#upgrade-options-for-sql-server

As we are running SQL Server locally on the primary site server, we need to install the upgraded version of SQL Server on this newly created server.

  1. On our previous server we were running SQL Server 2014 (Microsoft SQL Server 2014 (SP2-CU2) - 12.0.5522.0 (X64)).
  2. On the new primary site server, we installed Microsoft SQL Server 2016 Enterprise edition (Microsoft SQL Server 2016 SP1 - 13.0.4001.0 (X64)).
  3. Copy the smsbackup locally to the new server.
  4. Copy the CM_<SiteCode>.mdf and CM_<SiteCode>_log.ldf files from the smsbackup location to exactly the same location on the new server as they were stored on your old primary (SQL) server. For example, if the .mdf and .ldf files on your old server were stored in <G:\MSSQL11\Data>, they must be stored in the same location on the new server.
  5. Attach the database on the new (SQL) server: in SQL Server Management Studio, click Add, provide the location of the .mdf and .ldf files copied from the backup, and click OK.

With this, the primary site database should be visible under Databases in SQL Server Management Studio.
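If you prefer to script the attach instead of clicking through SQL Server Management Studio, a minimal sketch using Invoke-Sqlcmd looks like this; the CM_P01 site database name and the file paths are placeholders for your environment.

# Attach the copied database files on the new SQL Server instance.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
CREATE DATABASE [CM_P01]
ON (FILENAME = N'G:\MSSQL11\Data\CM_P01.mdf'),
   (FILENAME = N'G:\MSSQL11\Data\CM_P01_log.ldf')
FOR ATTACH;
"@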

Now we can start the site recovery process.

Primary Site Recovery

6. Run the Configuration Manager setup and recover the site using the "manually recovered DB" option.

https://technet.microsoft.com/en-us/library/gg712697.aspx#Recover

  • Please follow the link below for the recovery process:
    General: System Center 2012 Configuration Manager R2 – Disaster Recovery for Entire Hierarchy and Standalone Primary Site recovery scenarios: http://www.microsoft.com/en-us/download/details.aspx?id=44295
  • After waiting a couple of hours, we can start the upgrade process for Configuration Manager 1702.
  • Copy the Configuration Manager 1702 media locally to the primary site server.
  • Also take a current SMS backup of the primary site.
  • The following is a checklist of required and recommended actions to perform prior to upgrading to System Center Configuration Manager: https://docs.microsoft.com/en-us/sccm/core/servers/deploy/install/upgrade-to-configuration-manager#bkmk_checklist
  • Make sure that all of the tasks are verified before going ahead with the upgrade.
  • Also make sure that TestDbUpgrade is successful before going ahead with the upgrade.

Note: We need to take a backup of the current database, restore it to a separate (non-production) SQL Server, and run TestDbUpgrade there.

The test upgrade should not be run on the production database.
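A sketch of what the test run looks like on that non-production server (the media path and site database name are placeholders; verify the exact /TESTDBUPGRADE syntax against the setup command-line documentation for your version):

# Run the database upgrade test against the restored copy, never against production.
& 'D:\ConfigMgr1702\SMSSETUP\BIN\X64\Setup.exe' /TESTDBUPGRADE CM_P01
# Review ConfigMgrSetup.log on the root of the system drive for the result.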

Now we can run the Configuration Manager 1702 media to start the upgrade process.

https://docs.microsoft.com/en-us/sccm/core/servers/deploy/install/upgrade-to-configuration-manager#bkmk_upgrade

NOTE: All of the above steps also apply if you have multiple sites. To upgrade multiple sites, perform the steps from the top to the bottom of the hierarchy (first upgrade the CAS, then the primary sites).

–Rajat Choubey

Support Engineer, Microsoft

Categories: Latest Microsoft News

Gain Insights into the JFK Files with Azure Search and Cognitive Services

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 14:11

This post is by Corom Thompson, Principal Software Engineer at Microsoft.

On November 22nd, 1963, the President of the United States, John F. Kennedy, was assassinated. He was shot by a lone gunman named Lee Harvey Oswald while riding through the streets of Dallas in his motorcade. The assassination has been the subject of so much controversy that, 25 years ago, an act of Congress mandated that all documents related to the assassination be released this year. The first batch of released files has more than 6,000 documents totaling 34,000 pages, and the last drop of files contains at least twice as many documents.

We’re all curious to know what’s inside them, but it would take decades to read through these documents. We approached this problem by using Azure Search and Cognitive Services to extract knowledge from this deluge of documents, with a continuous process that ingests the raw documents and enriches them into structured information that lets you explore the underlying data.

Today, at the Microsoft Connect(); 2017 event, we created the demo web site* shown in Figure 1 below – a web application that uses the AzSearch.js library and is designed to give you interesting insights into this vast trove of information.


Figure 1 – JFK Files web application for exploring the released files

On the left you can see that the documents are broken down by the entities that were extracted from them. Already we know these documents are related to JFK, the CIA, and the FBI. Leveraging several Cognitive Services, including optical character recognition (OCR), Computer Vision, and custom entity linking, we were able to annotate all the documents to create a searchable tag index.
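For anyone curious about how such an index is queried, a minimal sketch against the Azure Search REST API is shown below; the service name, index name, API version, and key are all placeholders, and the demo site itself goes through the AzSearch.js library rather than raw REST calls.

# Query an Azure Search index for a term, roughly what the web app does under the covers.
$service = 'my-search-service'
$index   = 'jfk-files'
$apiKey  = '<query-api-key>'

$uri = "https://$service.search.windows.net/indexes/$index/docs" +
       '?api-version=2016-09-01&search=Oswald'
$result = Invoke-RestMethod -Uri $uri -Headers @{ 'api-key' = $apiKey }
$result.value | Select-Object -First 5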

We were also able to create a visual map of these linked entities to demonstrate the relationships between the different tags and data. Below, in Figure 2, is the visualization of what happened when we searched this index for “Oswald”.


Figure 2 – Visualization of the entity linked mapping of tags for the search term “Oswald”

Through further investigation and linking, we found that the Entity Linking Cognitive Service had annotated this term with a connection to Wikipedia; we quickly realized that the Nosenko identified in the documents was actually a KGB defector interrogated by the CIA, and that there are audio tapes of the actual interrogation. It would have taken years to figure out these connections, but we were instead able to do this in minutes thanks to the power of Azure Search and Cognitive Services.

Another fun fact we learned is that the government was actually using SQL Server and a secured architecture to manage these documents in 1997, as seen in the architecture diagram in Figure 3 below.


Figure 3 – Architecture diagram from 1997 indicating SQL Server was used to manage these documents

We have created an architecture diagram of our own to demonstrate how this new AI-powered approach is orchestrating the data and pulling insights from it – see Figure 4 below.

This is the updated architecture we used to apply the latest and greatest Azure-powered developer tools to create these insightful web apps. Figure 4 displays this architecture using the same style from 54 years ago.


Figure 4 – Updated architecture of Azure Search and Cognitive Services

We’ll be making this code available soon, along with tutorials of how we built the solution – stay tuned for more updates and links on this blog.

Meanwhile, you can navigate through the online version of our application* and draw your own insights!

Corom

* Try typing a keyword into the Search bar up at the top of the demo site, to get started, e.g. “Oswald”.

Categories: Latest Microsoft News

Windows Server preview build 17035 available for Windows Insiders!

Latest Microsoft Server Management News - Wed, 11/15/2017 - 13:00

We’re at full speed ahead! Just three weeks ago, we made available Windows Server, version 1709, the first release in the new Semi-Annual Channel release cadence, and today we’re making available the first preview build for the next release in this channel. The preview build 17035 is now available for download for Windows Insiders.

The preview builds are an important part of the development process for Windows Server. It’s through the feedback from customers using these builds that we’re able to find and fix bugs, as well as continue to improve each release. In addition, within these releases we show some new features and capabilities so if you’re looking for what’s next, here’s your chance to see what’s coming.

Storage Spaces Direct ready for validation and feedback!

Yes, we told you this when we first launched version 1709. Storage Spaces Direct is the foundation of our hyper-converged solution and we’re continuing to evolve it. In this preview build we not only brought it back, but also added some new and important updates to it, such as support for Data Deduplication, a much-requested feature for Storage Spaces Direct and ReFS. Starting in this build you’ll be able to reduce your data footprint by up to 50%.
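If you want to try it on a test volume, enabling deduplication is a couple of cmdlets; the drive letter below is a placeholder, and behavior on ReFS volumes in this preview may differ from what is documented for NTFS today.

# Enable Data Deduplication on a volume and check the reported savings later.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'E:' -UsageType Default
Get-DedupStatus -Volume 'E:'    # shows saved space once optimization jobs have run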

It’s important to remember though, that Windows Server 2016 (the latest Long-Term Servicing Channel release) continues to be the recommended version for production hyper-converged systems, and the Windows Server Software Defined program offers an end-to-end validated solution. The preview builds are not supported in production environments and the Semi-Annual Channel releases should only be considered for workloads that would benefit from the fastest release cadence.

Project Honolulu preview builds available for Insiders!

New to Insiders, an early update to the technical preview of Project “Honolulu”. Project “Honolulu” is a flexible, lightweight, browser-based, customer-deployed platform and solution for Windows Server management scenarios such as troubleshooting, configuration, and maintenance. Project “Honolulu” technical preview 1711 build 01003 is now available to Insiders, ahead of the public, and is our first update since our initial technical preview 1709 released for Ignite.

This new preview build of Project “Honolulu” brings some very exciting new features including support for managing Windows 10 in a new “Computer Management” solution, a new Remote Desktop tool for connecting directly to targeted machines, and some performance improvements and bug fixes.

How to get started, new features, and known issues

If you haven’t yet, sign up for the Windows Insiders program. To obtain the build, registered Insiders may navigate directly to the Windows Server Insider preview download page. You may also want to check the Getting Started with Server page to get a step-by-step guide on how to start using the Windows Server preview builds. A complete list of new features and known issues will be available in the Windows Server Insiders space in the Tech Community.

The most important part of a frequent release cycle is to hear what’s working and what needs to be improved, so your feedback is extremely valuable. On your registered Windows 10 Insider device, use the Feedback Hub application. In the app, choose the “Server” category and then the appropriate subcategory for your feedback. Please indicate which build number you are providing feedback on.

We will continue to provide new preview builds prior to the next release in the Semi-Annual Channel, so keep an eye out to see what’s coming!

Categories: Latest Microsoft News

Exciting AI Platform & Tools Announcements from Microsoft

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 09:58

Re-posted from the Azure blog.

We made some exciting AI-related announcements at Microsoft Connect(); 2017 earlier today. Specifically, we talked about how we’re making it even easier for developers and data scientists to infuse AI into new and existing apps with these new capabilities:

With these updates, the Microsoft AI platform – summarized in the picture below – now offers comprehensive cloud-based, on-premises, and edge support – in other words, all the infrastructure, tools, frameworks, services and solutions needed by developers, data scientists and businesses to infuse AI into their products and services.


Check out the original post here to learn about these updates in more detail, and about the innovative ways in which our customers are putting these new AI technologies to use in the real world.

ML Blog Team

Resources:

  • Visit http://www.azure.com/ai to learn more about how AI can augment and empower digital transformation efforts.
  • Visit the AI School to get up to speed with all the relevant AI technologies.
Categories: Latest Microsoft News

Artificial Intelligence and Machine Learning on the Cutting Edge

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 09:30

This post is authored by Ted Way, Senior Program Manager at Microsoft.

Today we are excited to announce the ability to bring intelligence to the edge with the integration of Azure Machine Learning and Azure IoT Edge. Businesses today understand how artificial intelligence (AI) and machine learning (ML) are critical to help them go from telling the “what happened” story to the “what will happen” and “how can we make it happen” story. The challenge is how to apply AI and ML to data that cannot make it to the cloud, for instance due to data sovereignty, privacy, bandwidth or other issues. With this integration, all models created using Azure Machine Learning can now be deployed to any IoT gateways and devices with the Azure IoT Edge runtime. These models are deployed to the edge in the form of containers and can run on very small footprint devices.

Intelligent Edge
Use Cases

There are many use cases for the intelligent edge, where a model is trained in the cloud and then deployed to an edge device. For example, a hospital wants to use AI to identify lung cancer on CT scans. Due to patient privacy and bandwidth limitations, a large CT scan containing hundreds of images can be analyzed on a server in the hospital, rather than being sent to the cloud. Another example is predictive maintenance of equipment on an oil rig in the ocean. All the data from the sensors on the oil rig can be sent to a server on the rig itself, and ML models can predict whether equipment is about to break down. Some of that data can then be sent on to the cloud to get an overview of what’s happening across all oil rigs and for historical data analysis. Other examples include sensitive financial or personally identifiable data that can be processed on edge devices instead of having to be sent to the cloud.

Intelligent Edge Architecture

The picture below shows how to bring models to edge devices. In the example of finding lung cancer on CT scans, Azure ML can be used to train models in the cloud using big data and CPU or GPU clusters. The model is operationalized to a Docker container with a REST API, and the container image can be stored in a registry such as Azure Container Registry.

In the hospital, an Azure IoT Edge gateway such as a Linux server can be registered with Azure IoT Hub in the cloud. Using IoT Hub, the pipeline can be configured as a JSON file. For example, a data ingest container knows how to talk to devices, and the output of that container goes to the ML model. The edge configuration JSON file is deployed to the edge device, and the edge device knows to pull the right container images from container registries.

In this way, AI and ML can be applied to data in the hospital, whether from medical images, sensors, or anything else that generates data.


Azure ML Integration

All Azure ML models operationalized as Docker containers can also run on Azure IoT Edge devices. The container has a REST API that can be used to access the model. This container can be instantiated on Azure Container Service to scale out to as many requests as you need or deployed to any device that can run a Docker container.

For edge deployments, the Azure ML container can talk to the Azure IoT Edge runtime. The Azure IoT Edge runtime is installed separately on a device, and it brokers communication among containers. The model may still be accessed via the REST API, or data from the Azure IoT Edge message bus can be picked up and processed by the model via the Azure IoT Edge SDK that’s incorporated into the Azure ML container.


AI Toolkit for Azure IoT Edge

Today’s IoT sensors include the capability to sense temperature, vibration, humidity, etc. AI on the edge promises to revolutionize IoT sensors and sense things that were not possible before – such as visual defects in products on an assembly line, acoustic anomalies from machinery, or entity matching in text. The AI Toolkit for Azure IoT Edge is a great way to get started. It is a collection of scripts, code, and deployable containers that enable you to quickly get up and running to deploy AI on edge devices.

We cannot wait to see the exciting ways in which you start bringing AI and ML to the [cutting] edge. If all this sounds exciting to you, we’re always on the lookout for great people to join our team!  

Ted
@tedwinway

Categories: Latest Microsoft News

Announcing SQL Operations Studio for preview

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 09:30

We are excited to announce that SQL Operations Studio is now available in preview. SQL Operations Studio is a free, lightweight tool for modern database development and operations, supporting SQL Server on Windows, Linux, and Docker, Azure SQL Database, and Azure SQL Data Warehouse, and running on Windows, Mac, or Linux machines.

Download SQL Operations Studio to get started.

It’s easy to connect to Microsoft SQL Server with SQL Operations Studio and perform routine database operations—overall lowering the learning curve for non-professional database administrators who have responsibility for maintaining their organization’s SQL-based data assets.

As more organizations adopt DevOps for application lifecycle management, developers and other non-professional database administrators find themselves taking responsibility for developing and operating databases. These individuals often do not have time to learn the intricacies of their database environment, making it hard to perform even the most routine tasks. Microsoft SQL Operations Studio takes a prescriptive approach to performing routine tasks, allowing users to get tasks done fast while continuing to learn on the job.


Users can leverage their favorite command line tools (e.g. Bash, PowerShell, sqlcmd, bcp and ssh) in the integrated terminal window right within the SQL Operations Studio user interface. They can easily generate and execute CREATE and INSERT scripts for SQL database objects to create copies of their database for development or testing purposes. Database developers can increase their productivity with smart T-SQL code snippets and rich graphical experiences to create new databases and database objects (such as tables, views, stored procedures, users, logins, roles, etc.) or to update existing database objects. They can also create rich, customizable dashboards to monitor and quickly detect performance bottlenecks in SQL databases on-premises or in Azure.
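To make that concrete, here is a hedged sketch of the kind of script those snippets and scripting gestures produce; the database name below is just a placeholder:

-- Create a database only if one with that name doesn't already exist (placeholder name).
IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = N'MyNewDatabase')
    CREATE DATABASE MyNewDatabase;
GO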

SQL Operations Studio comes at an opportune time for users who use clients running macOS or Linux. Many users who use or plan to deploy SQL Server 2017, which became generally available in September 2017, also use Macs as their clients. They will now be able to use a free database development and operations tool from Microsoft that runs natively on their OS of choice.

SQL Operations Studio has been forked from Visual Studio Code with the objective to make it highly extensible. It’s built on an extensible microservices architecture and includes the SQL tools service built on .NET Core. Users will be able to download it from GitHub or Microsoft.

We hope you love this new tool.  It’s received great reviews from the community testing it in private beta and, with your feedback, we can make it even better. Join us in improving SQL Operations Studio by contributing directly at the GitHub repo.

If you have questions or would like to add comments, please use the comments section below. We would love to hear from you!

Categories: Latest Microsoft News

SQL Server 2017 Features Bring ‘Choice’ to Developers

Latest Microsoft Data Platform News - Wed, 11/15/2017 - 09:30

Data is everywhere today: in the cloud, on premises, and everywhere in between, tied up in systems of nearly endless complexity. Microsoft solutions allow developers to innovate while also scaling and growing their data infrastructure. In SQL Server 2016 SP1, SQL Server made available a consistent programmable surface layer for all its editions, making it easy to write applications that target any edition of the database.  This year’s release takes it a step further with native support for Linux and Docker.

Microsoft puts the needs of the developer front and center in its data solutions. We have created the most advanced set of tools to radically lower the barriers to getting data – of any type, from anywhere – into the application design and build process. Today with the preview of Microsoft SQL Operations Studio, you can now access, run and manage this data from the system of your choice, on Windows, Linux and Docker.

Committed to choice for both database platform and tools

SQL Server 2017 also makes it easier to drive innovation via a CI/CD pipeline with the support of Docker containers. Since the Community Technology Preview of SQL Server 2017, there have been over 2 million Docker pulls. You can use any container orchestration layer, as SQL Server 2017 effectively becomes an application component within your compiled code hosted in the container. It is lightweight and very fast to install – SQL Server on Linux installs in less than a minute. As a result, you can update the entire software stack with each check-in and deployment. Learn more about DevOps using SQL Server.

There are also SQL Server client drivers for the major languages, including C#, Java, PHP, Node.js, Python, Ruby and C++. Any language that supports ODBC data sources should be able to use the ODBC drivers.  And any language based on the JVM should be able to use the JDBC or ODBC drivers. Choose any of the above languages to trial at our new hands-on labs.

In the spirit of choice, the data tools team today released SQL Operations Studio for public preview (see below). It is a lightweight, cross-platform database development and operations tool designed to help non-database professionals with the routine tasks necessary to update and maintain a database environment. It is based on .NET Core and forked from Visual Studio Code, making it extremely extensible and easy to use. Download it, try it out, and please give us feedback via GitHub issues!

R + Python built-in for in-database analytics

The confluence of cloud, data, and AI is driving unprecedented change. The ability to manage and manipulate data, and to turn it into breakthrough actions and experiences, is foundational to innovation today. We view data as the catalyst that augments human ingenuity, removing friction and driving innovation. We want to enable people to change and adapt quickly. Most of all, we want to equip today’s innovators and leaders to turn data into the insights and applications that will improve our world.

Developers and data scientists who explore and analyze data also have several new options. Now that SQL Server 2017 on Windows supports R and Python natively, they can either write R or Python scripts in the text editor of their choice, or embed their scripts directly into their SQL query in SQL Server Management Studio. See example below:
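As a minimal sketch of what an embedded script looks like via sp_execute_external_script (the input query and result shape here are placeholders; this assumes Machine Learning Services is installed and the "external scripts enabled" configuration option is turned on):

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# InputDataSet and OutputDataSet are the data frames SQL Server exchanges with the script
OutputDataSet = InputDataSet
',
    @input_data_1 = N'SELECT TOP (10) SalesOrderID, TotalDue FROM Sales.SalesOrderHeader'
WITH RESULT SETS ((SalesOrderID INT, TotalDue MONEY));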

Or if the analysis calls for highly complex joins, SQL Server 2017 also supports graph-based analytics, making it possible to describe nodes and edges within a SQL query. See example below:

CREATE TABLE Product (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;

CREATE TABLE Supplier (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;

CREATE TABLE hasInventory AS EDGE;

CREATE TABLE located_at(address varchar(100)) AS EDGE;
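Once those node and edge tables are populated, a query can traverse the graph with MATCH. A hedged sketch, assuming rows have been inserted into Product, Supplier, and hasInventory:

-- Which products does each supplier have in inventory? (illustrative data assumed)
SELECT Supplier.name AS SupplierName, Product.name AS ProductName
FROM Supplier, hasInventory, Product
WHERE MATCH(Supplier-(hasInventory)->Product);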

 

 

In-memory + performance for blazing-fast applications

And as we mentioned, choice does not have to come at the expense of performance! SQL Server 2017 also has some great performance-enhancing features, including adaptive query processing (AQP) and automatic plan correction (APC). AQP uses adaptive memory grants to track and learn how much memory a given query actually uses, so memory grants can be right-sized, while APC ensures continuous performance by finding and fixing plan regressions. Customers have responded very favorably to these features, including dv01, who moved off their OSS stack on AWS to run everything on SQL Server.
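As an illustrative example (it assumes the database is on SQL Server 2017 at compatibility level 140), automatic plan correction can be switched on with a single statement, and the engine's recommendations inspected through a DMV:

-- Let SQL Server force the last known good plan when it detects a plan regression.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the tuning recommendations the engine has collected.
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;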

In-Memory OLTP is the premier technology available in SQL Server and Azure SQL Database for optimizing performance of transaction processing, data ingestion, data load, and transient data scenarios.  Expect to see a 30x-100x increase in performance by keeping tables in-memory and using natively compiled queries.
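To give a concrete picture (the table and procedure names are illustrative, and the memory-optimized filegroup created in the steps below is a prerequisite), a memory-optimized table and a natively compiled query look roughly like this:

-- Durable memory-optimized table; requires a MEMORY_OPTIMIZED_DATA filegroup (see step 3 below).
CREATE TABLE dbo.SalesOrder_InMem
(
    OrderID   INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    Quantity  INT       NOT NULL,
    OrderDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure querying the in-memory table.
CREATE PROCEDURE dbo.usp_CountOrders
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT COUNT(*) AS OrderCount
    FROM dbo.SalesOrder_InMem;
END;
GO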

A couple of steps to consider if you’re going to use In-Memory OLTP:

1. It is recommended to set the database to the latest compatibility level, particularly for In-Memory OLTP:

ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
GO

2. When a transaction involves both a disk-based table and a memory-optimized table, it’s essential that the memory-optimized portion of the transaction operates at the transaction isolation level named SNAPSHOT:

ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
GO

3. Before you can create a memory-optimized table, you must first create a memory-optimized FILEGROUP and a container for data files:

ALTER DATABASE AdventureWorks ADD FILEGROUP AdventureWorks_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks ADD FILE (NAME='AdventureWorks_mod', FILENAME='c:\var\opt\mssql\data\AdventureWorks_mod') TO FILEGROUP AdventureWorks_mod;
GO

Security built-in at every level

Every edition of SQL Server provides a robust set of features designed to keep organizational data separate, secure, and safe. Two of the most interesting security features for developers are Always Encrypted and Row-Level Security.

Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (social security numbers), stored in Azure SQL Database or SQL Server databases. Always Encrypted allows customers to encrypt sensitive data inside their applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). The driver encrypts the data in sensitive columns before passing the data to the Database Engine, and automatically rewrites queries so that the semantics to the application are preserved. Similarly, the driver transparently decrypts data, stored in encrypted database columns, contained in query results. See the graphic below:
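Alongside the flow in the graphic, here is a hedged T-SQL sketch of what an Always Encrypted column declaration looks like; it assumes a column encryption key named CEK_Auto1 has already been provisioned:

CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic encryption (supports equality lookups) requires a BIN2 collation for character columns.
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,   -- placeholder key name
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);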

Row-Level Security (RLS) enables customers to control access to rows in a database table based on the characteristics of the user executing a query (for example, group membership or execution context).

Row-Level Security simplifies the design and coding of security in an application. Row-Level Security enables organizations to implement restrictions on data row access. For example, an organization can ensure that employees can access only those data rows that are pertinent to their department, or restrict a customer’s data access to only the data relevant to their company.

To configure Row-Level Security, follow the steps below:

1. Create user accounts to test Row-Level Security

USE AdventureWorks2014;
GO
CREATE USER Manager WITHOUT LOGIN;
CREATE USER SalesPerson280 WITHOUT LOGIN;

2. Grant read access to users on required table

GRANT SELECT ON Sales.SalesOrderHeader TO Manager;
GRANT SELECT ON Sales.SalesOrderHeader TO SalesPerson280;

3. Create a new schema and inline table-valued function

CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_securitypredicate(@SalesPersonID AS int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
    WHERE ('SalesPerson' + CAST(@SalesPersonId as VARCHAR(16)) = USER_NAME())
       OR (USER_NAME() = 'Manager');

4. Create a security policy adding the function as both a filter and block predicate on the table

CREATE SECURITY POLICY SalesFilter
    ADD FILTER PREDICATE Security.fn_securitypredicate(SalesPersonID) ON Sales.SalesOrderHeader,
    ADD BLOCK PREDICATE Security.fn_securitypredicate(SalesPersonID) ON Sales.SalesOrderHeader
    WITH (STATE = ON);

5. Execute a query against the table as each user to see the result, as sketched below (you can also alter the security policy to disable it)
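A hedged sketch of step 5, impersonating each of the test users created above and then turning the policy off:

-- Run the same query as each test user; the security policy filters the rows returned.
EXECUTE AS USER = 'SalesPerson280';
SELECT * FROM Sales.SalesOrderHeader;   -- only rows where SalesPersonID matches SalesPerson280
REVERT;

EXECUTE AS USER = 'Manager';
SELECT * FROM Sales.SalesOrderHeader;   -- all rows
REVERT;

-- Optionally disable the policy while troubleshooting.
ALTER SECURITY POLICY SalesFilter WITH (STATE = OFF);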

Thanks for joining us on this journey to SQL Server 2017. We hope you love it! Going forward, we will continue to invest in our cloud-first development model, to ensure that the pace of innovation stays fast, and that we can bring you even more and improved SQL Server features soon.

Here are a few links to get started:

Categories: Latest Microsoft News
