Feed aggregator

OSD Video Tutorial: Part 14 – Pre-staged Media

This session is part fourteen of an ongoing series focusing on Operating System Deployment in Configuration Manager. In it, Steven discusses the pre-staged media option for image deployment. The discussion includes a description of pre-staged media, the scenarios solved by pre-staged media and a demonstration of configuring and using pre-staged media.

The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies.

Next in the series, Steven discusses the intricacies of nested task sequences.

Posts in OSD - A Deeper Dive

Go straight to the Deeper Dive playlist

OSD Video Tutorial Overview


Categories: Latest Microsoft News

Because it’s Friday: A Titanic Brexit

Boris Johnson once declared that Britain leaving the European Union would be a "Titanic success". More than a year ago Comedy Central UK imagined what such a success would look like, and the prospects look much the same today: 

That's all from us for this week. Have a great weekend, and see you later! 


AI, Machine Learning and Data Science Roundup: August 2018

A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications I've noted over the past month or so.

Open Source AI, ML & Data Science News

ONNX Model Zoo is now available, providing a library of pre-trained state-of-the-art models in deep learning in the ONNX format.

In the 2018 IEEE Spectrum Top Programming Language rankings, Python takes the top spot and R ranks #7.

Julia 1.0 has been released, marking the stabilization of the scientific computing language and promising forwards compatibility.

A retrospective on the R Project, now in its 25th year, from Significance Magazine.

Industry News

Google announces Cloud AutoML, a beta service to train vision, text categorization, or language translation models from provided data. The fast.ai blog evaluates the claim that AutoML works without the need for machine learning skills.

Google announces Edge TPU, a hardware chip and associated software to bring AI capabilities to edge devices.

Oracle open-sources GraphPipe, a network protocol designed to simplify and standardize transmission of machine learning data between remote processes.

AWS Deep Learning AMIs now include ONNX, making it easier to deploy exported deep learning models.

Amazon Rekognition is available in two new regions: Seoul and Mumbai. The video analysis service caused a stir when the ACLU reported it matching pictures of members of Congress to a database of criminal mugshots.

RStudio introduces Package Manager, a repository management server product to organize and centralize R packages.

Data from the O'Reilly Machine Learning Adoption Survey reveal that most machine learning models are built by in-house data science or product development teams, with only 3% adopting cloud-based ML services.

Anaconda Enterprise 5.2 adds capabilities for GPU-accelerated machine learning.

Microsoft News

TechNative's review of Microsoft's AI philosophy and technologies, and the roadmap ahead.

Microsoft R Open 3.5.1 is now available.

Power BI Desktop now offers Python integration (in preview).

AI for Accessibility, a $25M Microsoft program applying artificial intelligence to help people with disabilities.

Azure SQL Database now offers reserved capacity at a discounted rate, and reservation discounts for virtual machines now apply when using differing VM sizes within a group.

Learning resources

The Chartmaker Directory, an index of dozens of data visualization types with examples in more than 30 software tools.

An overview of the benefits and limitations of FPGAs compared to CPUs and GPUs for numeric computing.

A guide to deep learning applied to natural language processing, for those new to the field.

Containerized R and Python Web services with Docker and Microsoft ML Server.

Design, architecture, and implementation of an R-based recommendation system in Azure.

A guide to distributed deep learning in Spark, using Azure Batch and Azure HDInsight.

Google's Machine Learning Guides, with machine learning tips and a guide to text classification.

Field Guide to Machine Learning, a 6-part video series from Facebook Research.

What Data Scientists Really Do, according to a Harvard Business Review article.

Survey analytics company Crunch uses R to provide a data visualization service.

Finding optimal staff composition for a professional services company, with Azure Machine Learning.

Find previous editions of the monthly AI roundup here


Database ownership chaining in Azure SQL Managed Instance

Latest Microsoft Data Platform News - 10 hours 22 min ago

Azure SQL Managed Instance enables you to run cross-database queries the same way you do in SQL Server. It also supports cross-database ownership chaining, which is explained in this post.

Cross-database ownership chaining enables logins to access objects in other databases on the SQL instance even when explicit access permissions have not been granted on those objects, provided that the logins access the objects via some view or procedure, that the view/procedure and the objects in the other database have the same owner, and that the DB_CHAINING option is turned on for the databases.

In this case, if several objects in several databases have the same owner, and you have a stored procedure that accesses these objects, you don’t need to GRANT access permissions on every object the procedure needs to access. If the procedure and the objects have the same owner, you can GRANT permission on the procedure alone, and the Database Engine will allow the procedure to access all other objects that share the same owner.

In this example, I will create two databases that have the same owner and a login that will be used to access the data. One database will contain a table, and the other will contain a stored procedure that reads data from the table in the first database. The login will be granted permission to execute the stored procedure, but not to read data from the table:

-- Create two databases and a login that will call a procedure in one database
CREATE DATABASE PrimaryDatabase;
GO
CREATE DATABASE SecondaryDatabase;
GO
CREATE LOGIN TheLogin WITH PASSWORD = 'Very strong password!'
GO

-- Create one database with a data table, and another database with a procedure that accesses the data table.
USE PrimaryDatabase;
GO
CREATE PROC dbo.AccessDataTable
AS BEGIN
    SELECT COUNT(*) FROM SecondaryDatabase.dbo.DataTable;
END;
GO
CREATE USER TheUser FOR LOGIN TheLogin;
GO
GRANT EXECUTE ON dbo.AccessDataTable TO TheUser;
GO

USE SecondaryDatabase;
GO
SELECT * INTO dbo.DataTable FROM sys.objects;
GO
CREATE USER TheUser FOR LOGIN TheLogin;
GO

If you try to read the table directly, you will get an error because the login doesn’t have SELECT permission on the table:

EXECUTE('SELECT * FROM SecondaryDatabase.dbo.DataTable') AS LOGIN = 'TheLogin';
GO
-- Msg 229, Level 14, State 5, Line 34
-- The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

The same thing will happen if you try to read the data from the table using the stored procedure:

EXECUTE('EXEC PrimaryDatabase.dbo.AccessDataTable') AS LOGIN = 'TheLogin';
GO
-- Msg 229, Level 14, State 5, Procedure dbo.AccessDataTable, Line 5 [Batch Start Line 65]
-- The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

Although the user has the rights to execute the procedure, the Database Engine will block the query because the login doesn’t have rights to read from the underlying table in SecondaryDatabase.

Now, we can enable ownership chaining on the databases:

ALTER DATABASE PrimaryDatabase SET DB_CHAINING ON;
GO
ALTER DATABASE SecondaryDatabase SET DB_CHAINING ON;
GO

If we try to access the table via the procedure again, we get the results:

EXECUTE('EXEC PrimaryDatabase.dbo.AccessDataTable') AS LOGIN = 'TheLogin' ;

The Managed Instance/Database Engine will see that the procedure and the table have the same owner, and since DB_CHAINING is turned on, it will allow access to the table.

However, note that the login still doesn’t have rights to access the table directly, because it was never granted access:

EXECUTE('SELECT * FROM SecondaryDatabase.dbo.DataTable') AS LOGIN = 'TheLogin';
GO
-- Msg 229, Level 14, State 5, Line 54
-- The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

Conclusion

Database ownership chaining can be useful, but from a security perspective it can also lead to unexpected behavior. Carefully analyze whether, and when, you want to configure it.


Make R speak

Latest Microsoft Data Platform News - Thu, 08/16/2018 - 15:59

Ever wanted to make R talk to you? Now you can, with the mscstts package by John Muschelli. It provides an interface to the Microsoft Cognitive Services Text-to-Speech API (hence the name) in Azure, and you can use it to convert any short piece of text to a playable audio file, rendering it as speech using a number of different voices.

Before you can generate speech yourself, you'll need a Bing Speech API key. If you don't already have an Azure account, you can generate a free 7-day API key in seconds by visiting the Try Cognitive Services page, selecting "Speech APIs", and clicking "Get API Key" for "Bing Speech":

No credit card is needed; all you need is a Microsoft, Facebook, LinkedIn or Github account. (If you need permanent keys, you can create an Azure account here which you can use for 5000 free Speech API calls per month, and also get $200 in free credits for use with any Azure service.)

Once you have your key (you'll actually get two, but you can use either one) you will call the ms_synthesize function to convert text up to 1,000 characters or so into mp3 data, which you can then save to a file. (See lines 8-10 in the speakR.R script, below.) Then, you can play the file with any MP3 player on your system. On Windows, the easiest way to do that is to call start on the MP3 file itself, which will use your system default. (You'll need to modify line 11 of the script to work on non-Windows systems.)

Saving the data to a file and then invoking a player got a bit tedious for me, so I created the say function in the script below to automate the process. Let's see it in action:
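The speakR.R script itself is embedded in the original post and doesn't survive in this text. As a rough sketch only — the exact ms_synthesize() argument names and return structure here are assumptions, so check the package's documentation — a minimal say() wrapper might look like this:

```r
library(mscstts)

# Minimal sketch of a say() helper: synthesize text to MP3 and play it.
# NOTE: the ms_synthesize() arguments and the $content field below are
# assumptions -- see ?ms_synthesize in the mscstts package for the
# actual signature.
say <- function(text, gender = "Female", language = "en-US") {
  # SSML-significant characters would confuse the service, so strip them
  text <- gsub("[<>/]", "", text)
  res <- ms_synthesize(script = text, gender = gender, language = language)
  mp3 <- tempfile(fileext = ".mp3")
  writeBin(res$content, mp3)   # save the raw MP3 bytes to a file
  shell.exec(mp3)              # Windows default player; adjust elsewhere
}

say("Hello from R!")
```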

Note that you can choose from a number of accents and spoken languages (including British English, Canadian French, Chinese and Japanese), as well as the gender of the voice (though both female and male voices aren't available for every language). You can even modify volume, pitch, speaking rate, and the pronunciation of individual words using the SSML standard. (This does mean you can't use characters recognized as SSML in your text, which is why the say function below filters out < > and / first.)

The mscstts package is available for download now from your favorite CRAN mirror, and you can find the latest development version on Github. Many thanks to John Muschelli for putting this handy package together!


OSD Video Tutorial: Part 13 – to be Known or to be Unknown – that is the question

Latest Microsoft Server Management News - Wed, 08/15/2018 - 20:17

This session is part thirteen of an ongoing series focusing on Operating System Deployment in Configuration Manager. Steven discusses the concept of Known and Unknown computer imaging and provides demonstrations and detailed discussion of the advantages and disadvantages of each approach.

The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies.

Next in the series, Steven will give a discussion of the pre-staged media option for image deployment.

Posts in OSD - A Deeper Dive

Go straight to the Deeper Dive playlist

OSD Video Tutorial Overview


Breaking Into Windows Server 2019: Network Features: High Performance SDN Gateways

Latest Microsoft Server Management News - Wed, 08/15/2018 - 10:30

Another happy Wednesday to all of our great readers! Brandon Wilson here again to give you another pointer to some more information from the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. This time around, they are once again covering some of the new Software Defined Networking (SDN) capabilities in Windows Server 2019, however this time, they are touching on SDN gateways. Here is some initial information straight from the product group:

“This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #6 – High Performance SDN Gateways

Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!  Don’t forget to check out all the features in the Top 10!

Here’s an Excerpt:

Last week we announced vast improvements to the management and deployment experience for SDN including Windows Admin Center interfaces! This week we’re excited to announce SDN performance improvements in hybrid connectivity scenarios!

Organizations today deploy their applications across multiple clouds including on-premises private clouds, service provider clouds, and public clouds such as Azure. In such scenarios, enabling secure, high-performance connectivity across workloads in different clouds is essential. Windows Server 2019 brings huge SDN gateway performance improvements for these hybrid connectivity scenarios, with network throughput multiplying by up to 6x!!!


As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson


Everything you need to know about Windows Server 2019 – Part 1

Latest Microsoft Server Management News - Wed, 08/15/2018 - 10:00

This blog post was authored by Vinicius Apolinario, Senior Product Marketing Manager, Windows Server.

You should know by now that Windows Server 2019 is available as a preview in the Windows Insiders program. In the last few months, the Windows Server team has been working tirelessly on some amazing new features. We wanted to share the goodness that you can expect in the product through a series of blog posts. This is the first in the series that will be followed by deep-dive blog posts by the engineering experts.

Windows Server 2019 has four main areas of investment, and below is a glimpse of each area.

  1. Hybrid: Windows Server 2019 and Windows Admin Center will make it easier for our customers to connect existing on-premises environments to Azure. Windows Admin Center also makes it easier for customers on Windows Server 2019 to use Azure services such as Azure Backup and Azure Site Recovery, with more services added over time.
  2. Security: Security continues to be a top priority for our customers and we are committed to helping our customers elevate their security posture. Windows Server 2016 started on this journey and Windows Server 2019 builds on that strong foundation, along with some shared security features with Windows 10, such as Defender ATP for server and Defender Exploit Guard.
  3. Application Platform: Containers are becoming popular as developers and operations teams realize the benefits of running in this new model. In addition to the work we did in Windows Server 2016, we have been busy with the Semi-Annual Channel releases and all that work culminates in Windows Server 2019. Examples of these include Linux containers on Windows, the work on the Windows Subsystem for Linux (WSL), and the smaller container images.
  4. Hyper-converged Infrastructure (HCI): If you are thinking about evolving your physical or host server infrastructure, you should consider HCI. This new deployment model allows you to consolidate compute, storage, and networking into the same nodes allowing you to reduce the infrastructure cost while still getting better performance, scalability, and reliability.

To get you excited, we are kicking off the Windows Server 2019 blog series with Jeff Woolsey showing a brief overview of some cool new features that you should try today! To hear more, check out the deep dive on Windows Server 2019 updates.

Download the Windows Server 2019 preview and give us feedback!

If you want to try all the new cool stuff that Jeff just showed you, download the Windows Server 2019 preview! More importantly, remember to use the Feedback App in your Windows 10 machine to give us feedback. You can also join the conversation on the Windows Server Tech Community space.


Top 10 Networking Features in Windows Server 2019: #6 High Performance SDN Gateways

Latest Microsoft Server Management News - Tue, 08/14/2018 - 17:00


This blog is part of a series for the Top 10 Networking Features in Windows Server 2019! Look for the Try it out sections, then give us some feedback in the comments! Don't forget to tune in next week for the next feature in our Top 10 list!

Organizations today deploy their applications across multiple clouds including on-premises private clouds, service provider clouds, and public clouds such as Azure. In such scenarios, enabling secure, high-performance connectivity across workloads in different clouds is essential. Windows Server 2019 brings huge SDN gateway performance improvements for these hybrid connectivity scenarios, with network throughput multiplying by up to 6x!!!

If you have deployed Software Defined Networking (SDN) with Windows Server 2016, you must be aware that, amongst other things, it provides connectivity between your cloud resources and enterprise resources through SDN gateways. In this article, we will talk about the following capabilities of SDN gateways:

  • IPsec tunnels provide secure connectivity over the Internet between your hybrid workloads
  • GRE tunnels provide connectivity between your workloads hosted in SDN virtual networks and physical resources in the datacenter/high speed MPLS networks. More details about GRE connectivity scenarios here.

In Windows Server 2016, one customer concern was the inability of the SDN gateway to meet the throughput requirements of modern networks. The network throughput of IPsec and GRE tunnels was limited, with the single-connection throughput being about 300 Mbps for IPsec connectivity and about 2.5 Gbps for GRE connectivity.

We have improved significantly in Windows Server 2019, with the numbers soaring to 1.8 Gbps and 15 Gbps for IPsec and GRE connections, respectively. All this with huge reductions in CPU cycles per byte, thereby providing ultra-high-performance throughput with much less CPU utilization.

Let’s talk numbers

We have done extensive performance testing for the SDN gateways in our test labs. In the tests, we have compared gateway network performance with Windows Server 2019 in SDN scenarios and non-SDN scenarios. The results are shown below:

GRE Performance Numbers

Network throughput for GRE tunnels in Windows Server 2019 without SDN varies from 2 to 5 Gbps, with SDN it leapfrogs to the range of 3 to 15 Gbps!!!

Note that the network throughput in Windows Server 2016 is much less than network throughput in Windows Server 2019 without SDN. With Windows Server 2019 SDN, the comparison is even more stark.


The CPU cycles/byte without SDN varies from 50 to 75, while it barely crosses 10 with SDN!!!


IPsec Performance Numbers

For IPsec tunnels, the Windows Server 2019 SDN network throughput is about 1.8 Gbps for 1 tunnel and about 5 Gbps for 8 tunnels. Compare this to Windows Server 2016 where the network throughput of a single tunnel was 300 Mbps and the aggregate IPsec network throughput for a gateway VM was 1.8 Gbps.


The CPU cycles/byte without SDN varies from 50 to 90, while it is well within 50 with SDN!!!

With GRE, the aggregate SDN gateway network throughput scales to 15 Gbps and with IPsec, it can scale to 5 Gbps!!!

Test Setup

The test setup simulates connectivity between the SDN gateway and on-prem gateway in a private lab environment. The on-prem gateway is configured with Windows Routing and Remote Access (RAS) to act as a VPN Site-to-Site endpoint. Following are the setup details on the SDN gateway host and the SDN gateway VM:

Gateway HOST

  1. There are two NUMA nodes on the host machine with 8 cores per NUMA node. RAM on the gateway host is 40 GB. The gateway VM has full access to one NUMA node, which is different from the NUMA node used by the host.
  2. Hyper threading is disabled
  3. Receive side buffer and send side buffer on physical network adapters is set to 4096
  4. Receive side scaling (RSS) is enabled on the host physical network adapters. Min and max processors are set to be from the NUMA node which the host is affinitized to. MaxProcessors is set to 8 (number of cores per NUMA node). 
  5. Jumbo packets are set on the physical network adapters with value of 4088 bytes
  6. Receive Side Scaling is enabled in the vSwitch.

Gateway VM

  1. The gateway VM is allocated 8 GB of memory
  2. For the Internal and External network adapters, the Send Side Buffer is configured with 32 MB of RAM and Receive Side Buffer is configured with 16 MB of RAM
  3. Forwarding Optimization is enabled for the Internal and External network adapters.
  4. Jumbo packets are enabled on the Internal and External network adapters with value of 4088 bytes
  5. VMMQ is enabled on the internal port of the VM
  6. VMQ and VRSS are enabled on the external network adapter of the VM
See it in action

The short demo below showcases the improved performance throughput with Windows Server 2019. This demo uses a performance tool called ctsTraffic to measure the network throughput of a single IPsec connection through the SDN VPN gateway. Traffic is being sent from a customer workload machine in the SDN network to an on-prem enterprise resource across a simulated Internet. As you can see, with Windows Server 2016, the network throughput of a single IPsec connection is only about 300 Mbps, while with Windows Server 2019, the network throughput scales to about 1.8 Gbps.


Try it out

For GRE connections, you should automatically see the improved performance once you deploy/upgrade to Windows Server 2019 builds on the gateway VMs. No manual steps are involved.

For IPsec connections, by default, when you create the connection for your virtual networks you will get the Windows Server 2016 data path and performance numbers. To enable the Windows Server 2019 data path, you will need to do the following:

  1. On an SDN gateway VM, go to Services console (services.msc).
  2. Find the service named “Azure Gateway Service”, and set the startup type of this service to “Automatic”
  3. Restart the gateway VM. Note that the active connections on this gateway will be failed over to a redundant gateway VM
  4. Repeat the previous steps for the rest of the gateway VMs
NOTE: For best performance results, ensure that the cipherTransformationConstant and authenticationTransformConstant in the quickMode settings of the IPsec connection use the “GCMAES256” cipher suite.

One more thing: To get maximum performance, the gateway host hardware must support AES-NI and PCLMULQDQ CPU instruction sets. These are available on any Westmere (32nm) and later Intel CPU except on models where AES-NI has been disabled. You can look at your hardware vendor documentation to see if the CPU supports AES-NI and PCLMULQDQ CPU instruction sets.
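The manual steps above can also be scripted on each gateway VM. A hedged sketch only — the service's internal (non-display) name is not given in the post, so it is looked up here via the display name quoted above:

```powershell
# Sketch: enable the Windows Server 2019 IPsec data path on a gateway VM.
# The service is located by its display name, since its internal service
# name may differ ("Azure Gateway Service" is taken from the steps above).
$svc = Get-Service -DisplayName "Azure Gateway Service"
Set-Service -Name $svc.Name -StartupType Automatic

# Restarting the VM fails active connections over to a redundant gateway VM
Restart-Computer -Force

# Repeat on each remaining gateway VM, e.g. remotely:
# Invoke-Command -ComputerName gw02, gw03 -ScriptBlock { ... }
```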

Ready to give it a shot!? Download the latest Insider build and Try it out!

We value your feedback

The most important part of a frequent release cycle is to hear what’s working and what needs to be improved, so your feedback is extremely valued.

Contact us if you have any questions or are having any issues with your deployment or validation. We also encourage you to send us email at sdninsider@microsoft.com to collaborate, share, and learn from other customers like you.

Thanks for reading,

Anirban Paul


PowerShell Module Function Export in Constrained Language

Latest Microsoft Server Management News - Tue, 08/14/2018 - 13:39

PowerShell offers a number of ways to expose functions in a script module. But some options have serious performance or security drawbacks. In this blog I describe these issues and provide simple guidance for creating performant and secure script modules. Look for a module soon in PSGallery that helps you update your modules to be compliant with this guidance.

When PowerShell is running in Constrained Language mode, it adds some restrictions on how module functions can be exported. Normally, when PowerShell is not running in Constrained Language mode, all script functions defined in the module are exported by default.

# TestModule.psm1
function F1 { }
function F2 { }
function F3 { }

# TestModule.psd1
@{ ModuleVersion = '1.0'; RootModule = 'TestModule.psm1' }

# All functions (F1, F2, F3) are exported and available
Get-Module -Name TestModule -List | Select -ExpandProperty ExportedFunctions

F1
F2
F3

This is handy and works well for simple modules. However, it can cause problems for more complex modules.

Performance Degradation

Command discovery is much slower when script functions are exported implicitly or explicitly using wildcard characters. This is because PowerShell has to parse all module script content to look for available functions and then match the found function names with a wildcard pattern. If the module uses explicit function export lists, then this parsing during discovery is not necessary. If you have a lot of custom script modules with many functions, the performance hit can become very noticeable. This principle also applies to exporting any other script element such as cmdlets, variables, aliases, and DSC resources.

# TestModule.psm1
function F1 { }
function F2 { }
function F3 { }
...

# This wildcard function export has the same behavior as the default
# behavior: all module functions are exported, and PowerShell has to
# parse all script content to discover the available functions
Export-ModuleMember -Function '*'

Confused Intent

For large complex modules, exporting all defined functions is confusing to users as to how the module is intended to be used. The number of defined functions can be very large and the handful of user cmdlets can get lost in the noise. It is much better to export just the functions intended for the user and hide all helper functions.

# TestModule.psm1
function Invoke-Program { }
function F1 { }
function F2 { }
...
function F100 { }

# TestModule.psd1
@{ ModuleVersion = '1.0'; RootModule = 'TestModule.psm1'; FunctionsToExport = 'Invoke-Program' }

Get-Module -Name TestModule -List | Select -ExpandProperty ExportedFunctions

Invoke-Program

Security

PowerShell runs in Constrained Language mode when a DeviceGuard or AppLocker policy is enforced on the system. This provides a good user shell experience while allowing trusted script modules to run in Full Language so that system management can still be done. For example, a user from the command line cannot use Add-Type to create and run arbitrary C# types, but a trusted script can.

So, it is important that a trusted script does not expose any vulnerabilities such as script injection or arbitrary code execution. Another type of vulnerability is leaking dangerous module functions not intended for public use. A helper function might take arbitrary source code and create a type intended to be used privately in a trusted context. But, if that helper function becomes publicly available, it exposes a code execution vulnerability.

# TestModule.psm1
function Invoke-Program { }

# Private helper function
function Get-Type
{
    param( [string] $source )
    Add-Type -TypeDefinition $source -PassThru
}

# Exposes *all* module functions!
Export-ModuleMember -Function '*'

Get-Module -Name TestModule -List | Select -ExpandProperty ExportedFunctions

Invoke-Program
Get-Type

In the above example, the Get-Type helper function is exported via the wildcard along with the intended Invoke-Program function. Since this is a trusted module, Get-Type runs in Full Language and exposes the ability to create arbitrary types.

Unintended Consequences

A major problem with exporting module functions using wildcards is that you may end up exporting functions unintentionally. For example, your module may specify other nested modules, it may explicitly import other modules, or it may dot-source script files into the module scope. All of those script functions will become publicly available if wildcards are used to export module functions.

# TestModule.psm1
Import-Module HelperMod1
. .\CSharpHelpers.ps1
function Invoke-Program { }

# Exposes *all* module functions!
Export-ModuleMember -Function '*'

Get-Module -Name TestModule -List | Select -ExpandProperty ExportedFunctions

Invoke-Program
HelperFn1
HelperFn2
Compile-CSharp

Module Function Export Restrictions

When PowerShell detects that an application whitelisting policy is enforced it runs in Constrained Language mode as mentioned previously, but it also applies some function export restrictions for imported modules. Remember that these restrictions only apply when PowerShell is running under DeviceGuard or AppLocker policy enforcement mode. Otherwise module function export works as before.

  • Wildcards are not allowed with the FunctionsToExport keyword in a module manifest (.psd1 file). If a wildcard is found in the keyword argument then no functions are exported in that module.
  • Wildcards are allowed in a module script file (.psm1). This is to provide backward compatibility but we strongly discourage it.
  • A module that uses wildcards to export functions, and at the same time dot sources script files into the module scope, will throw an error during module loading time. Note that if a psm1 file exports functions via wildcard, but it is imported under a manifest (psd1 file) that exports functions explicitly by name, then no error is thrown because the psd1 overrides any function export done within a psm1 file associated with the manifest. But if the psm1 file is imported directly (without the psd1 manifest file) then the error is thrown (see example below). Basically, the dot source operator cannot be used in module script along with wildcard based function export. It is too easy to inadvertently expose unwanted functions.

These restrictions are to help prevent inadvertent exposure of functions. By using wildcard based function export, you may be exposing dangerous functions without knowing it.

# TestModule.psm1
Import-Module HelperMod1
. .\CSharpHelpers.ps1
function Invoke-Program { }
Export-ModuleMember -Function '*'

# TestModule.psd1
@{ ModuleVersion='1.0'; RootModule='TestModule.psm1'; FunctionsToExport='Invoke-Program' }

# Importing the psm1 file directly results in an error because of the
# wildcard function export combined with the dot-source operator
Import-Module -Name TestModule\TestModule.psm1
Error: 'This module uses the dot-source operator while exporting functions using wildcard characters, and this is disallowed when the system is under application verification enforcement.'

# But importing via the module manifest succeeds, since the manifest
# explicitly exports functions by name without wildcards
Import-Module TestModule
Get-Module -Name TestModule | Select -ExpandProperty ExportedFunctions

Invoke-Program

Module Function Export Best Practices

Best practices for module function export are pretty simple. Always export module functions explicitly by name, and never export using wildcard names. This will yield the best performance and ensure you don’t expose functions you don’t intend to expose. It makes your module safer to use as trusted in a DeviceGuard policy enforcement environment.

# TestModule.psm1
Import-Module HelperMod1
. .\CSharpHelpers.ps1
function Invoke-Program { }

# TestModule.psd1
@{ ModuleVersion='1.0'; RootModule='TestModule.psm1'; FunctionsToExport='Invoke-Program' }

Get-Module -Name TestModule -List | Select -ExpandProperty ExportedFunctions

Invoke-Program

Paul Higinbotham
Senior Software Engineer
PowerShell Team

Categories: Latest Microsoft News

Federal Court Rules that Oracle Is Entitled to a Permanent Injunction Against Rimini Street and Awards Attorneys' Fees in Copyright Suit

Latest Oracle Press Releases - Tue, 08/14/2018 - 13:09

Redwood Shores, Calif.—Aug 14, 2018

Today, for the second time, a Federal Court in Nevada granted Oracle's motion for a permanent injunction against Rimini Street for years of infringement of Oracle’s copyrights. In an opinion notable for its strong language condemning Rimini Street’s actions, the Court made clear that since its inception, Rimini’s business “was built entirely on its infringement of Oracle’s copyrighted software.” The Court also highlighted Rimini's “conscious disregard” for Oracle's copyrights and Rimini's “significant litigation misconduct” in granting Oracle's motion for its attorneys' fees to be paid.

“As the Court's Order today makes clear, Rimini Street's business has been built entirely on unlawful conduct, and Rimini's executives have repeatedly lied to cover up their company's illegal acts. Rimini, which admits that it is the subject of an ongoing federal criminal investigation, has proven itself to be disreputable, and it seems very clear that it will not stop its unlawful conduct until a Court orders it to stop. Oracle is grateful that today the Federal Court in Nevada did just that,” said Dorian Daley, Oracle's Executive Vice President and General Counsel.

The Court noted that it was Rimini's brazen misconduct that enabled it to "rapidly build" its infringing business, while at the same time irreparably damaging Oracle because Rimini's very business model “eroded the bonds and trust that Oracle has with its customers.” It also stressed that for over five years of litigation, “literally up until trial”, Rimini Street denied the allegations of infringement. At trial, however, Rimini CEO Seth Ravin, who was also a defendant, changed his story and admitted for the first time that Rimini Street did in fact engage in all the infringing activities that Oracle had identified.

Finally, the Court declared that over $28M in attorneys’ fees should be awarded to Oracle because of Rimini Street’s significant litigation misconduct in this action. Rimini’s comments to the market that this award would have to be returned to Rimini have proven to be false and misleading, like so many of its actions and assurances to customers and others.

Contact Info

Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935

Follow Oracle Corporate

Categories: Latest Oracle News

The Microsoft AI Idea Challenge – Breakthrough Ideas Wanted!

Latest Microsoft Data Platform News - Tue, 08/14/2018 - 12:52

This post is authored by Tara Shankar Jana, Senior Technical Product Marketing Manager at Microsoft.

All of us have creative ideas – ideas that can improve our lives and the lives of thousands, perhaps even millions of others. But how often do we act on turning those ideas into a reality? Most of the time, we do not believe in our ideas strongly enough to pursue them. Other times we feel like we lack a platform to build out our idea or showcase it. Most good ideas don’t go beyond those initial creative thoughts in our head.

If you’re a professional working in the field of artificial intelligence (AI), or an aspiring AI developer or just someone who is passionate about AI and machine learning, Microsoft is excited to offer you an opportunity to transform your most creative ideas into reality. Join the Microsoft AI Idea Challenge Contest today for a chance to win exciting prizes and get your project featured in Microsoft’s AI.lab showcase. Check out the rules, terms and conditions of the contest and then dive right in!

The Challenge

The Microsoft AI Idea Challenge is seeking breakthrough AI solutions from developers, data scientists, professionals and students, and preferably developed on the Microsoft AI platform and services. The challenge gives you a platform to freely share AI models and applications, so they are reusable and easily accessible. The ideas you submit are judged on the parameters shown in the figure below – essentially half the weight is for the originality of your idea, 20% for the feasibility of your solution, and 30% for the complexity (i.e. level of sophistication) of your implementation.
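To make the weighting concrete, here is a small illustrative sketch (the criterion names and the 0-100 scale are assumptions for illustration only; the contest's actual scoring mechanics are defined by its official rules):

```python
# Hypothetical illustration of the stated judging weights:
# 50% originality, 20% feasibility, 30% complexity.
WEIGHTS = {"originality": 0.5, "feasibility": 0.2, "complexity": 0.3}

def overall_score(scores: dict) -> float:
    """Weighted sum of per-criterion scores (assumed 0-100 scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"originality": 90, "feasibility": 70, "complexity": 80}
print(f"overall: {overall_score(example):.1f}")  # 45 + 14 + 24
```

The takeaway is simply that originality carries as much weight as feasibility and complexity combined, so a novel idea with a modest implementation can still score well.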

The Microsoft AI Idea Challenge is accepting submissions between now and October 12th, 2018.

To qualify for the competition, individuals or teams are required to submit a working AI model, test dataset, a demo app and a demo video that can be a maximum of three minutes long. We encourage you to register early and upload your projects soon, so that you can begin to plan and build out your solution and turn in the rest of your materials on time. We are looking for solutions across the whole spectrum of use cases – to be inspired, take a look at some of the examples at AI.lab.

Prizes

The winners of the first three places in the contest will respectively receive a Surface Book 2, a DJI Drone, and an Xbox One X.

We hope that’s motivation to get you started today – good luck!

Tara

Categories: Latest Microsoft News

Microsoft R Open 3.5.1 now available

Latest Microsoft Data Platform News - Tue, 08/14/2018 - 10:33

Microsoft R Open 3.5.1 has been released, combining the latest R language engine with multi-processor performance and tools for managing R packages reproducibly. You can download Microsoft R Open 3.5.1 for Windows, Mac and Linux from MRAN now. Microsoft R Open is 100% compatible with all R scripts and packages, and works with all your favorite R interfaces and development environments.

This update brings a number of minor fixes to the R language engine from the R core team. It also makes available a host of new R packages contributed by the community, including packages for downloading financial data, connecting with analytics systems, applying machine learning algorithms and statistical models, and many more. New R packages are released every day, and you can access packages released after the 1 August 2018 CRAN snapshot used by MRO 3.5.1 using the checkpoint package.

We hope you find Microsoft R Open useful, and if you have any comments or questions please visit the Microsoft R Open forum. You can follow the development of Microsoft R Open at the MRO Github repository. To download Microsoft R Open, simply follow the link below.

MRAN: Download Microsoft R Open

Categories: Latest Microsoft News

Python in Visual Studio 2017 version 15.8

Latest Microsoft Data Platform News - Tue, 08/14/2018 - 09:54

We have released the 15.8 update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to look at some of the new features we have added for Python developers: IntelliSense with typeshed definitions, faster debugging, and support for Python 3.7. For a list of all changes in this release, check out the Visual Studio release notes.

Faster debugging, on by default

We first released a preview of our ptvsd 4.0 debug engine in the 15.7 release of Visual Studio. In the 15.8 release it is now the default, offering faster and more reliable debugging for all users.

If you encounter issues with the new debug engine, you can revert to the previous one by selecting Use legacy debugger from Tools > Options > Python > Debugging.

Richer IntelliSense

We are continuing to make improvements to IntelliSense for Python in Visual Studio 2017. In this release you will notice completions that are faster, more reliable, and have better understanding of the surrounding code, and tooltips with more focused and useful information. Go To Definition and Find All References are better at taking you to the module a value was imported from, and Python packages that include type annotations will provide richer completions. These changes were made as part of our ongoing effort to make our Python analysis from Visual Studio available as an independent Microsoft Python Language Server.
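As a minimal sketch of why annotations help, consider a hypothetical library function with an annotated return type; the annotation gives an analysis engine enough information to offer precise completions on the function's result without running any code:

```python
# Illustrative only: a hypothetical annotated function, showing the kind of
# type information that lets static analysis offer richer completions.
from typing import List

def split_words(text: str) -> List[str]:
    """The annotated return type tells tooling this yields a list of str."""
    return text.split()

words = split_words("hello world")
# Because the declared return type is List[str], an IDE can statically
# complete list methods on `words` and str methods on each element.
print(words)
```

The same principle is what typeshed provides at scale: type stubs for libraries that do not annotate their own source.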

As an example, hovering over the os module in 15.8 produces a tooltip with richer, more focused information than the same hover in 15.7.

We have also added initial support for using typeshed definitions to provide more completions in places where our static analysis cannot infer complete information. We are still working through some known issues, so results may be limited; expect better typeshed support in future releases.

Support for Python 3.7

We have updated Visual Studio so that all of our features work with Python 3.7, which was recently released. Most Visual Studio functionality already worked with Python 3.7 in the 15.7 release; in 15.8 we made specific fixes so that debug attach, profiling, and mixed-mode (cross-language) debugging also work with Python 3.7.

Give Feedback

Be sure to download the latest version of Visual Studio and try out the above improvements. If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our GitHub page. Follow our Python blog to make sure you hear about our updates first, and thank you for using Visual Studio!

Categories: Latest Microsoft News

Start-up Uses Oracle Cloud to Launch AI-Powered Hub for Social Networks

Latest Oracle Press Releases - Tue, 08/14/2018 - 08:00
Virtual Artifacts’ new Hibe hub connects diverse mobile apps, enabling consumers to stick with their social platform of choice

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced Virtual Artifacts has launched its mobile application network, Hibe, with Oracle Cloud. The company has developed Hibe as a new social network for mobile applications that lets consumers communicate with each other from their social platform of choice. To prepare for rapid growth, Virtual Artifacts invested in Oracle Cloud, including Oracle Autonomous Database, Oracle Cloud Platform, and Oracle Cloud Infrastructure.

Hibe helps different mobile applications communicate with each other seamlessly, without fear of data or privacy spillage. Built on an AI-driven proprietary privacy engine, Hibe lets users easily connect, interact, and transact together, each from their own favorite communication tools. The new network removes the inconvenience of switching between applications by keeping communications synced in one place.

"With 4.7 billion users on 6 million mobile apps, our ability to grow our database exponentially and globally was critical,” said Stéphane Lamoureux, COO, Virtual Artifacts. “We wanted a cloud provider that shares our values on privacy, is agile, and can work closely with us to build specific solutions. Oracle was the only vendor that could meet all of our requirements. With Oracle Autonomous Database, we no longer need to worry about manual, time-intensive tasks, such as database patching, upgrading, and tuning.”

With Oracle Autonomous Database, the industry’s first self-driving, self-securing, and self-repairing database, Virtual Artifacts can avoid the complexities of database configuration, tuning, and administration and instead focus on innovation. Oracle Autonomous Data Warehouse Cloud provides an easy-to-set-up and use data store for Hibe data, enabling Virtual Artifacts to better understand customer behavior.

The Hibe platform uses a number of Oracle Cloud Platform and Infrastructure services to support its current operations and anticipated growth. Using Oracle Mobile Cloud’s AI-driven intelligent bot on Hibe, Virtual Artifacts will be able to answer developers’ questions and make it easy for app providers to access the platform. The integration of Oracle’s mobile applications and chatbot technology on Hibe will also enable customers to directly engage with consumers, making it possible to quickly understand key audiences and improve the mobile experience.

Additionally, Virtual Artifacts will work with Oracle to develop the Hibe Marketplace, including an advertising engine to match items with interested consumers connected to Hibe through their respective mobile applications. The Hibe platform will also host and distribute content. By using Oracle’s blockchain platform, Virtual Artifacts and other content contributors on the platform will be able to track usage, distribution, and copyright attribution to help ensure it is being used and credited properly.

“The launch of the Hibe platform showcases the capabilities of Oracle’s Autonomous Cloud,” said Christopher Chelliah, group vice president and chief architect, Oracle Asia Pacific. “We essentially become the back end globally, leaving Virtual Artifacts to concentrate on the Hibe platform. This means that as a native cloud startup, we are supporting Virtual Artifacts from zero-to-production, and beyond. The combination of the strength of Oracle Cloud and Virtual Artifacts’ innovative privacy engine has the potential to revolutionize the app-to-app ecosystem.”

Contact Info

Dan Muñoz
Oracle
650.506.2904
dan.munoz@oracle.com

Jesse Caputo
Oracle
650.506.5967
jesse.caputo@oracle.com

About Virtual Artifacts

Based in Montreal, Canada, Virtual Artifacts is committed to delivering privacy-centric technologies that bridge the gap between online and real-life interactions and fundamentally transform how individuals, brands, and organizations meet, connect, communicate and transact online. These proprietary technologies both protect individual privacy and disrupt many online industries, including ecommerce, advertising, shipping, and mobile, as well as the manner in which brands manage key relationships with customers. Over the last 10 years, the company has developed decision-making AI engines and a centralized blockchain-like data structure to host, contextualize, distribute and ensure the confidentiality of online activities powered by the company’s technologies. For more information about Virtual Artifacts, please visit us at www.virtualartifacts.com and www.hibe.com.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • 650.506.2904

Jesse Caputo

  • 650.506.5967

Follow Oracle Corporate

Categories: Latest Oracle News

Oracle Recognized as a Leader in the 2018 Gartner Magic Quadrant for Web Content Management

Latest Oracle Press Releases - Tue, 08/14/2018 - 05:00
For the second consecutive year, Oracle positioned as a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Aug 14, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Web Content Management” report1 for the second consecutive year. The recognition is further validation of Oracle’s continued leadership in this category and unique approach to helping enterprises streamline content management and build immersive digital experiences for users.

“As the world increasingly adopts internet and mobile-first strategies, the ability to create a cohesive, seamless experience across an enterprise’s properties is critical,” said Amit Zavery, executive vice president, Oracle Cloud Platform. “Customers recognize the importance of their content and the benefits of Oracle Content and Experience Cloud, including its expansive features and ability to easily manage content and deliver exceptional user experiences across all platforms.”

According to Gartner, “Leaders should drive market transformation. They have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can:

  • Demonstrate enterprise deployments
  • Offer integration with other business applications and content repositories
  • Support multiple vertical and horizontal contexts”

Oracle Content and Experience Cloud is the modern platform for driving omni-channel content and workflow management as well as rich experiences to optimize customer journeys, and partner and employee engagement. It is the only solution of its kind available with out-of-the-box integrations with Oracle on-premises and SaaS solutions. With Oracle Content and Experience Cloud, organizations can enable content collaboration, deliver consistent experiences across online properties, and manage media-centric content storage all within one central content hub. In addition, Oracle’s capabilities extend beyond the typical role of content management, providing low-code development tools for building digital experiences that leverage a service catalog of data connections.

Access a complimentary copy of Gartner’s 2018 “Magic Quadrant for Web Content Management” here.

1. Gartner, “Magic Quadrant for Web Content Management,” by Mick MacComascaigh, Jim Murphy, July 30, 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info

Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com

Jesse Caputo
Oracle
+1.650.506.5697
jesse.caputo@oracle.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Jesse Caputo

  • +1.650.506.5697

Follow Oracle Corporate

Categories: Latest Oracle News

Oracle Helps Organizations Manage International Trade More Efficiently

Latest Oracle Press Releases - Tue, 08/14/2018 - 04:30
New releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud help customers reduce costs, streamline trade regulatory compliance, and accelerate customer fulfillment

Redwood Shores, Calif.—Aug 14, 2018

To help customers succeed in an increasingly complex global economy, Oracle today introduced new releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud. The new releases enable customers who seek to reduce costs, streamline compliance with global trade regulations, and accelerate customer fulfillment, to manage these significant needs on a single, integrated platform.

The latest releases of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud provide real-time, data-driven insights into shipment routes and automated event handling. They also offer expanded regulatory support for fast, accurate screening and customs declarations. For example, Nahdi Medical Company has been able to improve truck utilization by five to ten percent since implementing the latest release of Oracle Transportation Management.

“We now have greater visibility into shipment demands and delivery status, which means we have the flexibility to quickly re-do plans as new requirements arise,” said Sayed Al-Sayed, supply chain applications manager, Nahdi Medical Company. “The accuracy of Oracle Transportation Management Cloud’s recommended plans and route optimizations has enabled us to increase truck utilization rates, which has allowed us to cut time on shipments, while also saving money.”

Enhancements to Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud include:

  • Enhanced routing: Enables customers to make better decisions when routing shipments by accounting for factors such as historic traffic patterns, hazardous materials, and tolls when planning shipments

  • Expanded regulatory support: Enables customers to better meet their global trade needs in today’s dynamically changing regulatory environment by supporting expanded regulatory content, allowing more accurate screening, and providing enhancements for summary customs declarations

  • Advanced planning: Enables customers to better map out and improve transportation planning in end-to-end, outbound order fulfillment by providing enhancements to sample integrated flows with Oracle Order Management Cloud and Oracle Inventory Management Cloud

  • Driver-oriented features: Enables customers to improve the handling of assignments for shift-based drivers by providing enhanced support for planning and execution of private and dedicated fleets, including driver-oriented workflow in the OTM Mobile App

  • IoT Fleet Monitoring integration: Enables customers to have real-time visibility into a shipment’s location through automatic event handling

  • UX and collaboration: Enables customers to have a simpler and more configurable user interface, including integration with Oracle Content and Experience Cloud, which streamlines collaboration with peers

  • Graphical diagnostics tool: Enables customers to easily research and resolve questions in real time that may arise from the shipment planning processes by providing them with a visual diagnostic tool, which highlights areas that warrant further investigation

“The global trade and logistics landscape is dramatically changing and organizations need to be able to accommodate a complex web of regulations and customer expectations for each shipment in order to be competitive,” said Derek Gittoes, vice president, SCM Product Strategy, Oracle. “The combination of Oracle Transportation Management Cloud and Oracle Global Trade Management Cloud gives our customers an innovative platform to reduce complexity and effectively improve the efficiency of their global trade compliance and transport operations.”

The enhancements further extend Oracle’s leadership position in the transportation management category. Oracle Transportation Management Cloud provides a single platform for transportation orchestration across the entire supply chain. Oracle Global Trade Management Cloud offers unparalleled visibility and control over orders and shipments to reduce costs, automate processes, streamline compliance, and improve service delivery.

Contact Info

Vanessa Johnson
Oracle
+1650.607.1692
vanessa.n.johnson@oracle.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle. 

Talk to a Press Contact

Vanessa Johnson

  • +1650.607.1692

Follow Oracle Corporate

Categories: Latest Oracle News

Hyper-V HyperClear Mitigation for L1 Terminal Fault

Latest Microsoft Server Management News - Tue, 08/14/2018 - 03:00
Introduction

A new speculative execution side channel vulnerability was announced recently that affects a range of Intel Core and Intel Xeon processors. This vulnerability, referred to as L1 Terminal Fault (L1TF) and assigned CVE-2018-3646 for hypervisors, can be used for a range of attacks across isolation boundaries, including intra-OS attacks from user mode to kernel mode as well as inter-VM attacks. Due to the nature of this vulnerability, creating a robust inter-VM mitigation that doesn’t significantly degrade performance is particularly challenging.

For Hyper-V, we have developed a comprehensive mitigation to this attack that we call HyperClear. This mitigation is in use by Microsoft Azure and is available in Windows Server 2016 and later. The HyperClear mitigation continues to allow safe use of SMT (hyper-threading) with VMs and, based on our observations of deploying it in Microsoft Azure, has been shown to have relatively negligible performance impact.

We have already shared the details of HyperClear with industry partners. Since we have received questions as to how we are able to mitigate the L1TF vulnerability without compromising performance, we wanted to broadly share a technical overview of the HyperClear mitigation and how it mitigates L1TF speculative execution side channel attacks across VMs.

Overview of L1TF Impact to VM Isolation

As documented here, the fundamental premise of the L1TF vulnerability is that it allows a virtual machine running on a processor core to observe any data in the L1 data cache on that core.

Normally, the Hyper-V hypervisor isolates what data a virtual machine can access by leveraging the memory address translation capabilities provided by the processor. In the case of Intel processors, the Extended Page Tables (EPT) feature of Intel VT-x is used to restrict the system physical memory addresses that a virtual machine can access.

Under normal execution, the hypervisor leverages the EPT feature to restrict what physical memory can be accessed by a VM’s virtual processor while it is running. This also restricts what data the virtual processor can access in the cache, as the physical processor enforces that a virtual processor can only access data in the cache corresponding to system physical addresses made accessible via the virtual processor’s EPT configuration.

By successfully exploiting the L1TF vulnerability, the EPT configuration for a virtual processor can be bypassed during the speculative execution associated with this vulnerability. This means that a virtual processor in a VM can speculatively access anything in the L1 data cache, regardless of the memory protections configured by the processor’s EPT configuration.

Intel’s Hyper-Threading (HT) technology is a form of Simultaneous MultiThreading (SMT). With SMT, a core has multiple SMT threads (also known as logical processors), and these logical processors (LPs) can execute simultaneously on a core. SMT further complicates this vulnerability, as the L1 data cache is shared between sibling SMT threads of the same core. Thus, a virtual processor for a VM running on a SMT thread can speculatively access anything brought into the L1 data cache by its sibling SMT threads. This can make it inherently unsafe to run multiple isolation contexts on the same core. For example, if one logical processor of a SMT core is running a virtual processor from VM A and another logical processor of the core is running a virtual processor from VM B, sensitive data from VM B could be seen by VM A (and vice-versa).

Similarly, if one logical processor of a SMT core is running a virtual processor for a VM and the other logical processor of the SMT core is running in the hypervisor context, the guest VM could speculatively access sensitive data brought into the cache by the hypervisor.

Basic Inter-VM Mitigation

To mitigate the L1TF vulnerability in the context of inter-VM isolation, the most straightforward mitigation involves two key components:

  1. Flush L1 Data Cache On Guest VM Entry – Every time the hypervisor switches a processor thread (logical processor) to execute in the context of a guest virtual processor, the hypervisor can first flush the L1 data cache. This ensures that no sensitive data from the hypervisor or previously running guest virtual processors remains in the cache. To enable the hypervisor to flush the L1 data cache, Intel has released updated microcode that provides an architectural facility for flushing the L1 data cache.
  2. Disable SMT – Even with flushing the L1 data cache on guest VM entry, there is still the risk that a sibling SMT thread can bring sensitive data into the cache from a different security context. To mitigate this, SMT can be disabled, which ensures that only one thread ever executes on a processor core.
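The flush-on-entry idea can be sketched as a toy model (purely illustrative Python; the real flush is an architectural microcode facility the hypervisor invokes, not software state):

```python
# Toy model of "flush L1D on guest entry". Illustrative only: the cache here
# is a dict standing in for hardware state the hypervisor cannot inspect.
class Core:
    def __init__(self):
        self.l1d = {}  # stand-in for the L1 data cache

    def run(self, context, data):
        # Executing in a context brings that context's data into the cache.
        self.l1d[context] = data

    def enter_guest(self, vm):
        # Mitigation: flush before guest code runs, so nothing from the
        # hypervisor or a previously running VM remains observable.
        self.l1d.clear()
        self.run(vm, f"{vm} working set")

core = Core()
core.run("hypervisor", "secret page tables")
core.enter_guest("VM-A")
assert "hypervisor" not in core.l1d  # VM-A cannot see hypervisor residue
```

Note what the model cannot capture: a sibling SMT thread could refill the cache after the flush, which is exactly why this basic mitigation also requires disabling SMT.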

Hyper-V versions prior to Windows Server 2016 employ a mitigation based on these components. However, this basic mitigation has the major downside that SMT must be disabled, which can significantly reduce the overall performance of a system. Furthermore, it can result in a very high rate of L1 data cache flushes, since the hypervisor may switch a thread between the guest and hypervisor contexts many thousands of times a second. These frequent cache flushes can also degrade the performance of the system.

HyperClear Inter-VM Mitigation

To address the downsides of the basic L1TF Inter-VM mitigation, we developed the HyperClear mitigation. The HyperClear mitigation relies on three key components to ensure strong Inter-VM isolation:

  1. Core Scheduler
  2. Virtual-Processor Address Space Isolation
  3. Sensitive Data Scrubbing
Core Scheduler

The traditional Hyper-V scheduler operates at the level of individual SMT threads (logical processors). When making scheduling decisions, the Hyper-V scheduler would schedule a virtual processor onto a SMT thread without regard to what the sibling SMT threads of the same core were doing. Thus, a single physical core could be running virtual processors from different VMs simultaneously.

Starting in Windows Server 2016, Hyper-V introduced a new scheduler implementation for SMT systems known as the “Core Scheduler”. When the Core Scheduler is enabled, Hyper-V schedules virtual cores onto physical cores. Thus, when a virtual core for a VM is scheduled, it gets exclusive use of a physical core, and a VM will never share a physical core with another VM.

With the Core Scheduler, a VM can safely take advantage of SMT (Hyper-Threading). When a VM is using SMT, the hypervisor’s scheduler allows the VM to use all the SMT threads of a core at the same time.

Thus, the Core Scheduler provides the essential protection that a VM’s data won’t be directly disclosed across sibling SMT threads. It protects against cross-thread data exposure of a VM since two different VMs never run simultaneously on different threads of the same core.
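The core-scheduling policy can be sketched as a small placement function (a toy Python model; the real Hyper-V scheduler is far more sophisticated, but the invariant it maintains is the same):

```python
# Toy model of core scheduling: virtual cores are placed onto whole
# physical cores, so sibling SMT threads never run different VMs at
# once. Illustrates the policy only, not the Hyper-V implementation.

def schedule(virtual_cores, physical_cores, smt_width=2):
    """Give each virtual core exclusive use of one physical core.

    virtual_cores: list of (vm_name, vcore_id) tuples.
    Returns {physical_core: [(vm_name, vcore_id, smt_thread), ...]}.
    """
    assignment = {}
    for pcore, (vm, vcore_id) in zip(range(physical_cores), virtual_cores):
        # Every SMT thread of this physical core belongs to the same VM.
        assignment[pcore] = [(vm, vcore_id, t) for t in range(smt_width)]
    return assignment

placement = schedule([("VM-A", 0), ("VM-B", 0)], physical_cores=2)

# Invariant: no physical core ever hosts threads from two VMs.
for threads in placement.values():
    assert len({vm for vm, _, _ in threads}) == 1
```

Because the invariant holds per physical core, a VM is free to use both SMT threads of its core, which is why the Core Scheduler lets guests keep the performance benefit of Hyper-Threading.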

However, the Core Scheduler alone is not sufficient to protect against all forms of sensitive data leakage across SMT threads. There is still the risk that hypervisor data could be leaked across sibling SMT threads.

Virtual-Processor Address Space Isolation

SMT threads on a core can independently enter and exit the hypervisor context based on their activity. For example, events like interrupts can cause an SMT thread to switch out of the guest virtual processor context and begin executing the hypervisor context. This can happen independently for each SMT thread, so one SMT thread may be executing in the hypervisor context while its sibling SMT thread is still running a VM’s guest virtual processor context. An attacker running code in the less trusted guest virtual processor context on one SMT thread can then use the L1TF side channel vulnerability to potentially observe sensitive data from the hypervisor context running on the sibling SMT thread.

One potential mitigation to this problem is to coordinate hypervisor entry and exit across SMT threads of the same core. While effective at mitigating the information disclosure risk, this coordination can significantly degrade performance.

Instead of coordinating hypervisor entry and exit across SMT threads, Hyper-V employs strong data isolation in the hypervisor to protect against a malicious guest VM leveraging the L1TF vulnerability to observe sensitive hypervisor data. The Hyper-V hypervisor achieves this isolation by maintaining a separate virtual address space in the hypervisor for each guest SMT thread (virtual processor). When the hypervisor context is entered on a specific SMT thread, the only data addressable by the hypervisor is data associated with the guest virtual processor running on that SMT thread. This is enforced through the hypervisor’s page tables, which selectively map only the memory associated with that guest virtual processor. No data for any other guest virtual processor is addressable, and thus the only data that can be brought into the L1 data cache by the hypervisor is data associated with the current guest virtual processor.

Thus, regardless of whether a given virtual processor is running in the guest VM virtual processor context or in the hypervisor context, the only data that can be brought into the cache is data associated with the active guest virtual processor. No additional privileged hypervisor secrets or data from other guest virtual processors can be brought into the L1 data cache.

This strong address space isolation provides two distinct benefits:

  1. The hypervisor does not need to coordinate entry and exits into the hypervisor across sibling SMT threads. So, SMT threads can enter and exit the hypervisor context independently without any additional performance overhead.
  2. The hypervisor does not need to flush the L1 data cache when entering the guest VP context from the hypervisor context. Since the only data that can be brought into the cache while executing in the hypervisor context is data associated with the guest virtual processor, there is no risk of privileged/private state in the cache that needs to be protected from the guest. Thus, with this strong address space isolation, the hypervisor only needs to flush the L1 data cache when switching between virtual cores on a physical core. This is much less frequent than the switches between the hypervisor and guest VP contexts.
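The address-space isolation idea can be modeled very simply (a toy Python sketch with hypothetical names; real page-table management is far more involved):

```python
# Toy model of per-virtual-processor address space isolation: the
# hypervisor's "page tables" for a given SMT thread map only data
# belonging to the currently running guest virtual processor.
# Illustrative only -- not Hyper-V internals.

ALL_HV_DATA = {
    ("VM-A", 0): {"vp_state": "A0-regs"},
    ("VM-B", 0): {"vp_state": "B0-regs"},
    "hv_secrets": {"pool": "privileged"},
}

def hypervisor_address_space(current_vp):
    # Map only the current VP's data. Hypervisor secrets and other
    # VPs' data are simply not addressable on this thread, so they
    # can never be pulled into the L1 data cache here -- which is
    # why no flush is needed when returning to this same guest VP.
    return {current_vp: ALL_HV_DATA[current_vp]}

aspace = hypervisor_address_space(("VM-A", 0))
assert ("VM-B", 0) not in aspace       # other guests unmapped
assert "hv_secrets" not in aspace      # privileged data unmapped
```

In this model, a cache flush is only required when a different virtual core is placed on the physical core, mirroring the much lower flush rate described above.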
Sensitive Data Scrubbing

There are cases where virtual processor address space isolation is insufficient to ensure isolation of sensitive data. Specifically, in the case of nested virtualization, a single virtual processor may itself run multiple guest virtual processors. Consider the case of an L1 guest VM running a nested hypervisor (L1 hypervisor). In this case, a virtual processor in this L1 guest may be used to run nested virtual processors for L2 VMs being managed by the L1 nested hypervisor.

In this case, the nested L1 guest hypervisor will be context switching between each of its nested L2 guests and its own hypervisor context. Thus, a virtual processor for the L1 VM maintained by the L0 hypervisor can run multiple different security domains – a nested L1 hypervisor context and one or more L2 guest virtual machine contexts. Since the L0 hypervisor maintains a single address space for the L1 VM’s virtual processor, this address space could contain data for both the nested L1 guest hypervisor and the L2 guest VMs.

To ensure a strong isolation boundary between these different security domains, the L0 hypervisor relies on a technique we refer to as state scrubbing when nested virtualization is in use. With state scrubbing, the L0 hypervisor avoids caching any sensitive guest state in its data structures. If the L0 hypervisor must read guest data, such as register contents, into its private memory to complete an operation, it overwrites this memory with zeros prior to exiting the L0 hypervisor context. This ensures that no sensitive L1 guest hypervisor or L2 guest virtual processor state is resident in the cache when switching between security domains in the L1 guest VM.

For example, if the L1 guest hypervisor accesses an I/O port that is emulated by the L0 hypervisor, the L0 hypervisor context will become active. To properly emulate the I/O port access, the L0 hypervisor will have to read the current guest register contents for the L1 guest hypervisor context, and these register contents will be copied to internal L0 hypervisor memory. When the L0 hypervisor has completed emulation of the I/O port access, the L0 hypervisor will overwrite any L0 hypervisor memory that contains register contents for the L1 guest hypervisor context. After clearing out its internal memory, the L0 hypervisor will resume the L1 guest hypervisor context. This ensures that no sensitive data stays in the L0 hypervisor’s internal memory across invocations of the L0 hypervisor context. Thus, in the above example, there will not be any sensitive L1 guest hypervisor state in the L0 hypervisor’s private memory. This mitigates the risk that sensitive L1 guest hypervisor state will be brought into the data cache the next time the L0 hypervisor context becomes active.
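The scrubbing pattern in the I/O port example above can be sketched as follows (a toy Python model with hypothetical names; the real L0 hypervisor operates on actual register state, not dictionaries):

```python
# Toy model of state scrubbing: when the simulated L0 hypervisor must
# copy guest register state into its own memory to emulate an
# operation, it zeroes that memory before returning to the guest.
# Names are hypothetical, not Hyper-V internals.

GUEST_REGS = {"rax": 0xDEAD, "rdx": 0x00F1}   # L1 guest register file

hv_scratch = {}   # L0 hypervisor private memory

def emulate_io_port_access(port):
    # Copy the guest registers needed for emulation into private memory.
    hv_scratch.update(GUEST_REGS)
    result = (hv_scratch["rax"], port)        # stand-in emulation work
    # Scrub: overwrite the copied guest state with zeros before exiting
    # the L0 hypervisor context, so nothing sensitive lingers in memory
    # (and hence in the L1 data cache) for L1TF to expose.
    for reg in hv_scratch:
        hv_scratch[reg] = 0
    return result

value, port = emulate_io_port_access(0x60)
assert all(v == 0 for v in hv_scratch.values())   # scratch is scrubbed
```

The point of the sketch is the ordering: the scrub happens on every exit path of the emulation, before control returns to the less trusted context.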

As described above, this state scrubbing model does involve some extra processing when nested virtualization is in use. To minimize this processing, the L0 hypervisor carefully tracks when it needs to scrub its memory so that it can do so with minimal overhead. The overhead of this extra processing is negligible in the nested virtualization scenarios we have measured.

Finally, the L0 hypervisor state scrubbing ensures that the L0 hypervisor can efficiently and safely provide nested virtualization to L1 guest virtual machines. However, to fully mitigate inter-VM attacks between L2 guest virtual machines, the nested L1 guest hypervisor must implement a mitigation for the L1TF vulnerability. This means the L1 guest hypervisor needs to appropriately manage the L1 data cache to ensure isolation of sensitive data across the L2 guest virtual machine security boundaries. The Hyper-V L0 hypervisor exposes the appropriate capabilities to L1 guest hypervisors to allow L1 guest hypervisors to perform L1 data cache flushes.

Conclusion

By using a combination of core scheduling, address space isolation, and data clearing, Hyper-V HyperClear is able to mitigate the L1TF speculative execution side channel attack across VMs with negligible performance impact and with full support of SMT.

Categories: Latest Microsoft News

Tick Tock, Time to Catch Up: What’s New with Windows Time in Windows Server 2019

Latest Microsoft Server Management News - Mon, 08/13/2018 - 16:43

Stay a while and….oh wait, wrong place. Welcome! My name is Tim Medina, Senior PFE with Microsoft, and today we are going to look at what’s new with Windows Time for Server 2019 (part 1 of a 3-part series). As with everything, time has marched on, and we are looking forward with the way we provide time services to your environments. We did release some information on what’s to come here.

So, building on the introduction of highly accurate time, we have kept the same configuration model and brought accuracy targets down to 1 s, 50 ms, and 1 ms. This has a high impact on time-sensitive businesses and requirements. It can also benefit normal operations, since controlling ticket lifetimes to the millisecond gives organizations tighter control over their identities.

As with previous releases, the configuration can be controlled via the registry, time commands, and Group Policy, and the registry has been extended for configuring highly accurate time as well. Also note that some requirements and restrictions apply.

We have also worked inside the service controls to reduce the possible impact of using SpecialPollInterval in conjunction with high-accuracy requirements. This should also move the flag controls to a more uniform usage.

All in all, we are continuing the work we started in Server 2016 and moving things forward from there. In the next blog we will dive into the configuration paths outlined above. Then, to wrap things up, we will look into the deeper functions and flows inside the service. So, as the bell tolls, we will see you next time.

 

For some extra information, you can also take a look at the product group’s post on this topic over at https://blogs.technet.microsoft.com/networking/2018/07/18/top10-ws2019-hatime/!

 

Categories: Latest Microsoft News

OSD Video Tutorial: Part 12 – OSD and the ADK

Latest Microsoft Server Management News - Mon, 08/13/2018 - 16:04

This session is part twelve of an ongoing series focusing on Operating System Deployment in Configuration Manager. We are taking a pause in our more traditional OSD discussion and looping back to the Windows Assessment and Deployment Kit (ADK). Topics center around how OSD leverages the ADK, with demonstrations of image installation and modification using just the ADK tools.

The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies.

Next in the series, Steven will discuss the concept of Known and Unknown computer imaging.

Posts in OSD - A Deeper Dive

Go straight to the Deeper Dive playlist

OSD Video Tutorial Overview

Categories: Latest Microsoft News
