Feed aggregator

Because it’s Friday: How Bitcoin works

Latest Microsoft Data Platform News - Fri, 07/21/2017 - 15:10

Cryptocurrencies have been in the news quite a bit lately. Bitcoin prices have been soaring recently after the community narrowly avoided the need for a fork, while $32M in rival currency Ethereum was recently stolen, thanks to a coding error in the wallet application Parity. But what is a cryptocurrency, and what does a "wallet" or a "fork" mean in that context? The video below gives the best explanation I've seen for how cryptocurrencies work. It's 25 minutes long, but it's a complex and surprisingly subtle topic, made easy to understand by the math explainer channel 3Blue1Brown.

That's all from the blog for this week. Have a great weekend, and we'll be back on Monday.

Categories: Latest Microsoft News

IEEE Spectrum 2017 Top Programming Languages

Latest Microsoft Data Platform News - Fri, 07/21/2017 - 11:41

IEEE Spectrum has published its fourth annual ranking of top programming languages, and the R language is again featured in the Top 10. This year R ranks at #6, down a spot from its 2016 ranking (its IEEE score, derived from search, social media, and job listing trends, is tied with the #5 place-getter, C#). Python has taken the #1 slot from C, jumping from its #3 ranking in 2016.

For R (a domain-specific language for data science) to rank in the top 10, and for Python (a general-purpose language with many data science applications) to take the top spot, may seem surprising. I attribute this to continued broad demand for machine intelligence application development, driven by the growth of "big data" initiatives and the strategic imperative for companies worldwide to capitalize on these data stores. Other data-oriented languages appear in the Top 50 rankings, including Matlab (#15), SQL (#23), Julia (#31) and SAS (#37).

For the complete announcement of the 2017 IEEE Spectrum rankings, including additional commentary and analysis of changes, follow the link below.

IEEE Spectrum: The 2017 Top Programming Languages

Categories: Latest Microsoft News

Partners: Thanks for joining us at Microsoft Inspire!

Latest Microsoft Server Management News - Thu, 07/20/2017 - 14:45

Last week in Washington D.C., we held Microsoft Inspire, our premier annual partner event. This year's event was a huge success, with over 17,000 attendees joining us from 140 countries. A big thank you to all of the Enterprise Mobility + Security partners who came out to spend the week with us!

As we mentioned in our preview of Inspire, there were eight Enterprise Mobility + Security sessions at the event. For those of you who couldn't make it, these sessions can be viewed on-demand here. We had many engaging conversations with our partners throughout the event, and there were a few key topics that we heard repeatedly:

  1. Excitement around Microsoft 365. During Monday morning's keynote, Satya Nadella unveiled Microsoft 365, a new offering that brings together Office 365, Windows 10, and Enterprise Mobility + Security. The EMS partners we spoke to were very excited about the potential of Microsoft 365, and were eager to learn more about the offering and start talking with their customers about how Microsoft 365 can help empower their digital transformations. Take a look at our recent blog post on EMS and Microsoft 365 to learn more.
  2. Security is top of mind. Many partners we spoke to were enthusiastic about working with Microsoft to help keep our joint customers secure. In particular, partners were interested in learning about One Microsoft Security, our unique approach to security, and in better understanding the power of the Microsoft Intelligent Security Graph. Partners left the event excited to discuss the power of the graph with their customers. Download the Security Practice Playbook to learn how to grow your security practice and transform your business.
  3. GDPR and compliance. One item that came up repeatedly in conversations with our partners was the importance of GDPR and compliance. The GDPR enforcement date is now less than a year away, and many of our partners felt that their customers still had a lot of work to do to comply. Check out our GDPR Partner site to learn more about how you can partner with us to address many of your customers' GDPR and compliance needs.

Next year, Microsoft Inspire will be held in fabulous Las Vegas, Nevada. Register now to get your All Access pass. We'll see you there!

Categories: Latest Microsoft News

Windows Server 2016 NTFS sparse file/Data Deduplication users: please install KB4025334

Latest Microsoft Server Management News - Thu, 07/20/2017 - 10:40

Hi folks,

KB4025334 prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. It helps avoid data corruption that can occur when using Data Deduplication in Windows Server 2016, although all applications and Windows components that use sparse files on NTFS benefit from applying this update. Installing this KB prevents any new or further corruption for Data Deduplication users on Windows Server 2016. It does not help recover corruption that has already happened, because NTFS incorrectly removes in-use clusters from the file and there is no way to identify which clusters were incorrectly removed after the fact. Although KB4025334 is an optional update, we strongly recommend that all NTFS users, and especially those using Data Deduplication, install it as soon as possible. This fix will become mandatory in the "Patch Tuesday" release for August 2017.

For Data Deduplication users, this data corruption is particularly hard to notice because it is a so-called "silent" corruption: it cannot be detected by the weekly Dedup integrity scrubbing job. Therefore, KB4025334 also includes an update to chkdsk to help identify which files are corrupted. Affected files can be identified using chkdsk with the following steps:

  1. Install KB4025334 on your server from the Microsoft Update Catalog and reboot. If you are running a Failover Cluster, this patch will need to be applied to all nodes in the cluster.
  2. Run chkdsk in read-only mode (this is the default mode for chkdsk).
  3. For potentially corrupted files, chkdsk will report something like the following
    The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.

    where 20000000000f3 is the file id. Note all affected file ids.
  4. Use fsutil to look up the name of the file by its file id. This should look like the following (a short script that automates this lookup follows these steps):

    E:\myfolder> fsutil file queryfilenamebyid e: 0x20000000000f3
    A random link name to this file is \\?\E:\myfolder\TEST.0

    where E:\myfolder\TEST.0 is the affected file.
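
If you have many affected files, the lookup in steps 2-4 can be scripted. Here is a minimal PowerShell sketch, assuming the affected volume is E: and that chkdsk emits the English message text shown above (the pattern would need adjusting on localized systems):

    # Run chkdsk in read-only mode (the default) and keep its output.
    $chkdskOutput = chkdsk E:

    # Pull the file ids out of messages like:
    #   The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.
    $fileIds = $chkdskOutput |
        Select-String -Pattern 'of file ([0-9a-f]+) is incorrect' |
        ForEach-Object { $_.Matches[0].Groups[1].Value }

    # Resolve each file id to a path with fsutil (run from an elevated prompt).
    foreach ($id in $fileIds) {
        fsutil file queryfilenamebyid E: ("0x" + $id)
    }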

We’re very sorry for the inconvenience this issue has caused. Please don’t hesitate to reach out in the comment section below if you have any additional questions about KB4025334, and we’ll be happy to answer.

Categories: Latest Microsoft News

Data Analysis for Life Sciences

Latest Microsoft Data Platform News - Thu, 07/20/2017 - 08:00

Rafael Irizarry from the Harvard T.H. Chan School of Public Health has presented a number of courses on R and Biostatistics on EdX, and he recently also provided an index of all of the course modules as YouTube videos with supplemental materials. The EdX courses are linked below, which you can take for free, or simply follow the series of YouTube videos and materials provided in the index. 

Data Analysis for the Life Sciences Series 

A companion book and associated R Markdown documents are also available for download.

Genomics Data Analysis Series

For links to all of the course components, including videos and supplementary materials, follow the link below.

rafalab: HarvardX Biomedical Data Science Open Online Training

Categories: Latest Microsoft News

Demo: Identify and fix plan change regression in SQL Server 2017 RC1

Latest Microsoft Data Platform News - Thu, 07/20/2017 - 04:52

Plan change regression happens when SQL Server changes the execution plan for a T-SQL query and the new plan performs worse than the previous one. SQL Server 2017 has an Automatic Tuning feature that enables you to easily find plan change regressions and fix them. In this post you will see a demo script that you can use to cause a plan change regression and manually fix it using the new sys.dm_db_tuning_recommendations view.

If you are not familiar with plan regressions and the new tuning recommendations in SQL Server 2017, I would recommend reading these two posts first:

They provide enough background to understand the steps in this demo.

Setup

First, we need a table against which to execute a query that causes a plan change regression. In the previous post (What is plan regression in SQL Server?), I created a demo that fills a table with data and causes a plan regression. I will use the same table in this demo:

drop table if exists flgp;

create table flgp (
    type int,
    name nvarchar(200),
    index ncci nonclustered columnstore (type),
    index ix_type (type)
);

insert into flgp(type, name) values (1, 'Single');

insert into flgp(type, name)
select TOP 999999 2 as type, o.name
from sys.objects, sys.all_columns o;

Once you create this table you can use the following scripts to cause the plan change regressions.

Plan choice regression

First, I will fill the history of plan executions by executing the query 30 times. This step is required because we need to have information about the good plan in Query Store:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

EXECUTE sp_executesql
    @stmt = N'SELECT COUNT(*) FROM flgp WHERE type = @type',
    @params = N'@type int',
    @type = 2
GO 30

If you look at the execution plan for this query, you will see a Hash Aggregate operator.

NOTE: If you see a Stream Aggregate instead of a Hash Aggregate in the plan, then for some reason SQL Server doesn't think that 999,999 rows with type=2 are enough to justify this plan. In that case, add more rows with type=2 to the flgp table by repeating the following query several times:

insert into flgp(type, name) select 2 as type, o.name from sys.objects, sys.all_columns o

Now I’m clearing the procedure cache and sending the same query but with a parameter that will touch only one row:

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

EXECUTE sp_executesql
    @stmt = N'SELECT COUNT(*) FROM flgp WHERE type = @type',
    @params = N'@type int',
    @type = 1

In this case, SQL Server will create a plan with a Stream Aggregate, which is optimal for the small number of rows. This plan will be cached in the procedure cache and used in subsequent executions of the query.

Now, if I execute the query again with @type=2, SQL Server will use the cached plan with the Stream Aggregate instead of the optimal plan with the Hash Aggregate:

EXECUTE sp_executesql
    @stmt = N'SELECT COUNT(*) FROM flgp WHERE type = @type',
    @params = N'@type int',
    @type = 2
GO 15

SQL Server needs at least 15 executions to collect enough information to compare the current and previous plans.

Identifying tuning recommendation

You can open Query Store > Top Resource Consuming Queries in SSMS to find the query and plans that regressed. If you don't know which plan regressed, it is easier to use the new sys.dm_db_tuning_recommendations view, which shows queries that regressed and plans that might be used instead of the current ones. You can use the following query to read information from this view:

SELECT
    reason,
    score,
    script = JSON_VALUE(details, '$.implementationDetails.script'),
    planForceDetails.[query_id],
    planForceDetails.[new plan_id],
    planForceDetails.[recommended plan_id],
    estimated_gain = (regressedPlanExecutionCount + recommendedPlanExecutionCount)
                     * (regressedPlanCpuTimeAverage - recommendedPlanCpuTimeAverage) / 1000000,
    error_prone = IIF(regressedPlanErrorCount > recommendedPlanErrorCount, 'YES', 'NO')
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (Details, '$.planForceDetails')
    WITH (
        [query_id] int '$.queryId',
        [new plan_id] int '$.regressedPlanId',
        [recommended plan_id] int '$.recommendedPlanId',
        regressedPlanErrorCount int,
        recommendedPlanErrorCount int,
        regressedPlanExecutionCount int,
        regressedPlanCpuTimeAverage float,
        recommendedPlanExecutionCount int,
        recommendedPlanCpuTimeAverage float
    ) AS planForceDetails;

This query returns information about the queries and plans that regressed, together with a T-SQL script that you can use to fix the issue.

reason:              Average query CPU time changed from 124.38ms to 2035.61ms
score:               43
script:              exec sp_query_store_force_plan @query_id = 2, @plan_id = 2
query_id:            2
new plan_id:         14
recommended plan_id: 2
estimated_gain:      118.496206423913
error_prone:         NO

If you execute the script in the script column, the previous good plan will be forced and used instead of the regressed plan. In this case, the following script will fix the regression and force the good plan:

exec sp_query_store_force_plan @query_id = 2, @plan_id = 2

If you execute the query again, you will see that it runs faster.

Automatic plan correction

As an alternative to manual monitoring and correction with the sp_query_store_force_plan procedure, you can let SQL Server automatically apply recommendations whenever a significant performance regression happens after a plan change, using the following code:

ALTER DATABASE current SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );

SQL Server will take recommendations from the sys.dm_db_tuning_recommendations view, apply them, and automatically verify that the forced plan performs better than the previous one.

Categories: Latest Microsoft News

SQL Server 2017 Reporting Services Release Candidate now available

Latest Microsoft Data Platform News - Wed, 07/19/2017 - 12:00

This week, we made available a Release Candidate of SQL Server 2017. Alongside it, we’re pleased to make available a Release Candidate of SQL Server 2017 Reporting Services.

As we described with the release of CTP 2.1, we moved Reporting Services installation from the SQL Server installer to a separate installer. This is a packaging change, not a product change; access to SQL Server Reporting Services is still included with your SQL Server license. The new installation process keeps our packages lean and enables customers to deploy and update Reporting Services with zero impact on your SQL Server deployments and databases.

(Looking for all the capabilities of SQL Server 2017 Reporting Services, plus web-based and mobile viewing of Power BI Reports? Check out Power BI Report Server, and be sure to read “A closer look at Power BI Report Server.”)

Try it now and send us your feedback
Categories: Latest Microsoft News

Oracle Significantly Expands Cloud at Customer with PaaS and SaaS Services to Help Customers in their Journey to the Cloud

Latest Oracle Database News - Wed, 07/19/2017 - 08:00
Press Release

Delivers unrivaled enterprise-grade public cloud SaaS, PaaS, and IaaS services in customers' datacenters

Redwood Shores, Calif.—Jul 19, 2017

Empowering organizations to move workloads to the cloud while keeping their data on their own premises, Oracle today announced significant expansion of the breadth of services available through Oracle Cloud at Customer. The portfolio now spans all of the major Oracle PaaS categories and for the first time, also features Oracle SaaS services. Since its introduction just over a year ago, Oracle Cloud at Customer has experienced unprecedented growth with leading global organizations across six continents and more than 30 countries adopting the solution, including AT&T and Bank of America.

Oracle Cloud at Customer is designed to enable organizations to remove one of the biggest obstacles to cloud adoption—data privacy concerns related to where the data is stored. While organizations are eager to move their enterprise workloads to the public cloud, many have been constrained by business, legislative and regulatory requirements that have prevented them from being able to adopt the technology. These first-of-a-kind services provide organizations with choice in where their data and applications reside and a natural path to easily move business critical applications eventually to the public cloud.

“Oracle Cloud at Customer is a direct response to the remaining barriers to cloud adoption and turning those obstacles into opportunities by letting customers choose the location of their cloud services,” said Thomas Kurian, president, product development, Oracle. “We are providing a unique service that enables our customers to leverage Oracle Cloud services, including SaaS, PaaS, and IaaS, both on their premises and in our cloud.  Customers gain all the benefits of Oracle’s robust cloud offerings, in their own datacenters, all managed and supported by Oracle.”

Underpinning Oracle Cloud at Customer is a modern cloud infrastructure platform based on converged Oracle hardware, software-defined storage and networking and a first class IaaS abstraction. Oracle fully manages and maintains the infrastructure at customers’ premises so that customers can focus on using the IaaS, PaaS and SaaS services. This is the same cloud infrastructure platform that powers the Oracle Cloud globally.

Based on overwhelming customer demand, Oracle continues to expand the services available via Oracle Cloud at Customer. With today’s news, customers now have access to all of Oracle’s major PaaS categories, including Database, Application Development, Analytics, Big Data, Application and Data Integration, and Identity Management. These services take advantage of specific enhancements that have been made to the underlying Oracle Cloud at Customer platform such as servers with faster CPUs and NVMe-based flash storage, as well as all-flash block storage to deliver even better performance for enterprise workloads.

For the first time, Oracle has also made available via Oracle Cloud at Customer, the ability to consume Oracle SaaS services such as Enterprise Resource Planning, Human Capital Management, Customer Relationship Management, and Supply Chain Management in their own datacenters. These best-in-class, modern applications help unlock business value and increase performance by enabling businesses and people to be more informed, connected, productive, and engaged. Major organizations are already adopting this new option to modernize their key enterprise operations and benefit from the speed of innovation in Oracle SaaS without having to move sensitive application data outside their premises. With the addition of SaaS services to Oracle Cloud at Customer, customers have access to Oracle Cloud services across the entire cloud stack, all delivered in a subscription-based, managed model, directly in their datacenters.

Also, newly available is the Oracle Big Data Cloud Machine, which is an optimized system delivering a production-grade Hadoop and Spark platform with the power of dedicated nodes and the flexibility and simplicity of a cloud offering. Organizations can now access a full range of Hadoop, Spark, and analytics tools on a simple subscription model in their own data centers.

Oracle Cloud at Customer delivers the following Oracle Cloud services:

  • Infrastructure: Provides elastic compute, containers, elastic block storage, object storage, virtual networking, and identity management to enable portability of Oracle and non-Oracle workloads into the cloud.
  • Data Management: Enables customers to use the number one database to manage data infrastructure in the cloud with the Oracle Database Cloud, including Oracle Database Exadata Cloud for extreme performance and Oracle MySQL Cloud.
  • Big Data and Analytics:  Empowers an entire organization to use a single platform to take advantage of any data to drive insights. Includes a broad set of big data cloud services, including Oracle Big Data Cloud Service, Oracle Analytics Cloud, and Oracle Event Hub Cloud.
  • Application Development: Enables organizations to develop and deploy Java applications in the cloud using Oracle Java Cloud, Oracle Application Container Cloud, Oracle Container Cloud, and Oracle WebCenter Portal Cloud.
  • Enterprise Integration: Simplifies integration of on-premises applications to cloud applications, as well as cloud application to cloud application integration using Oracle Integration Cloud, Oracle SOA Cloud, Oracle Data Integrator Cloud, Oracle GoldenGate Cloud, Oracle Managed File Transfer Cloud, and Oracle Internet of Things Cloud.
  • Security: Enables organizations to use Oracle Identity Cloud to implement and manage consistent identity and access management policies.
  • Software-as-a-Service: Provides organizations with a complete suite of software to run their businesses, including Oracle ERP Cloud, Oracle CX Cloud, Oracle HCM Cloud, and Oracle Supply Chain Management Cloud.

Customer Demand Drives Expansion of Portfolio

Global organizations are turning to Oracle Cloud at Customer to standardize on a platform to modernize existing infrastructure and develop innovative new applications. Customers including City of Las Vegas, Federacion Colombiana de Municipios, Glintt Healthcare, HCPA, NEC, NTT DATA, Rakuten Card, State University of New York, and State Bank of India are benefitting from Oracle Cloud services from inside their own datacenters.

“The City of Las Vegas is shifting its Oracle application workloads to the Oracle Cloud,” said Michael Sherwood, Director Information Technologies, city of Las Vegas. “By keeping the data in our data center, we retain full control while enabling innovation, gaining efficiencies and building applications to better serve our community.”

“Today, public organizations are constantly innovating to meet the needs of our citizens. For the Colombian Federation of Municipalities, we have decided to digitally transform our territories to become smart cities,” said Alejandro Murillo, CIO of the Colombian Federation of Municipalities. “With Oracle Cloud at Customer, we have the technological capabilities to bring top-level solutions in the cloud to our municipalities, enabling them to operate with more agility and better serve our citizens.”

“Oracle Cloud at Customer provides us with a consolidated solution to make sensitive healthcare data securely available,” said Nuno Vasco Lopes, CEO, Glintt Healthcare Solutions. “The efficient and flexible solution has reduced the total cost of ownership by 18 percent and delivered high customer performance.” 

Oracle Cloud at Customer

The Oracle Cloud at Customer portfolio of services enables organizations to get all of the benefits of Oracle’s public cloud services in their datacenters. The business model is just like a public cloud subscription; the hardware and software platform is the same; Oracle experts monitor and manage the infrastructure; and the same tools used in Oracle’s public cloud are used to provision resources on the Oracle Cloud at Customer services. This is the only offering from a major public cloud vendor that delivers a stack that is 100 percent compatible with the public cloud but available on-premises, ensuring that customers get the same experience and the latest innovations and benefits using it in their datacenters as in the public cloud. 

Additional Resources

Contact Info

Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com

Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.415.235.4033

Kristin Reeves

  • +1.415.856.5145

Follow Oracle Corporate

Categories: Latest Oracle News

TGI Fridays Becomes First UK Bar to Implement Bar Tab from Mastercard and Oracle

Latest Oracle Database News - Wed, 07/19/2017 - 06:00
Press Release

Innovation eliminates the need to leave a card behind the bar

London, U.K.—Jul 19, 2017

Today TGI Fridays, Oracle and Mastercard announced the launch of Bar Tab at their Leicester Square location. Bar Tab is a new function within Mastercard’s Qkr! payment app that allows consumers to set up, manage and pay bar tabs using their smartphones. The application will be integrated into Oracle Hospitality’s restaurant management platform and Masterpass, the digital payment service. After today’s initial debut, TGI Fridays plans to deploy the app to 80 additional locations in the UK by the end of 2017. 

Customers can easily manage their tab through a designated four-digit PIN that connects orders in the restaurant management software to the Qkr! payment account. This means customers can easily manage the rounds that they are in with friends and split the bill. There is no need to hand over a payment card to bar staff, and no need to use a card machine.

“We’re always looking for ways to improve the unique Fridays experience for our guests,” said Jeremy Dunderdale, Head of Business Solutions, TGI Fridays UK. “With Bar Tab, we’re able to offer our diners the freedom to settle their bills on-demand, with this quicker and more convenient payment platform. Enabling self-service payments also allows our team members to focus on engaging guests in more meaningful ways – which is what we’re all about at Fridays.”

Betty DeVita, Chief Commercial Officer for Mastercard Labs, said: “Nobody wants to hand over their card to a bartender. Your card should be with you at all times, so it’s natural for people to leave a pub without having closed their tab. This is a common problem we wanted to solve through Qkr. For bar staff themselves we have removed the headache of card storage and admin.”

“Oracle Hospitality’s restaurant management platform allows the food and beverage industry to innovate by creating a single view of operations,” said Dale Grant, Senior Vice President Food and Beverage Oracle Hospitality. “With Oracle Hospitality solutions at the core, restaurants and bars can easily integrate additional solutions like Mastercard’s Qkr payments platform to reinvent their customer experience. By implementing Bar Tab, TGI Fridays can now offer its customers a quicker, more convenient experience that empowers staff to provide more welcoming bar experiences while reducing the number of unpaid tabs at the end of the night.”

About Oracle Hospitality

Oracle Hospitality develops hardware and software solutions that work together to help produce tailored guest service for hotels and F&B establishments. By addressing every facet of the business, Oracle Hospitality solutions help optimize operations across the board to deliver the speed, agility and efficiency required to meet customers’ unique needs.

About Mastercard

Mastercard (NYSE: MA), www.mastercard.com, is a technology company in the global payments industry.  We operate the world’s fastest payments processing network, connecting consumers, financial institutions, merchants, governments and businesses in more than 210 countries and territories.  Mastercard products and solutions make everyday commerce activities – such as shopping, travelling, running a business and managing finances – easier, more secure and more efficient for everyone.  Follow us on Twitter @MastercardUKBiz, join the discussion on the Beyond the Transaction Blog and subscribe for the latest news on the Engagement Bureau.

About TGI Fridays

TGI Fridays offers authentic, contemporary, and full-flavoured American food, signature cocktails, and a lively, personalised experience.   With a continually evolving menu overseen by award-winning executive chef Terry McDowell, it’s the perfect stop for free-poured, personalised cocktails served by Fridays® Master Bartenders, a quick tasty bite, or a longer dinner with friends. Fridays® opened its first UK restaurant in Birmingham in March 1986. There are now 80 Fridays open in the UK.

For more information visit www.tgifridays.co.uk or www.fridays.com.  Like us on Facebook, follow us on Twitter, or visit our YouTube channel.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Follow Oracle Corporate

Categories: Latest Oracle News

NetSuite Supports BRCA Foundation with New Registry Program

Latest Oracle Database News - Wed, 07/19/2017 - 06:00
Press Release

Pro Bono Volunteers Build Out Sophisticated Enhancements to Meet Needs of Nonprofit Using the SuiteCloud Development Platform

San Mateo, Calif.—Jul 19, 2017

Oracle NetSuite Global Business Unit, one of the world’s leading providers of cloud-based financials / ERP, HR, Professional Services Automation (PSA) and omnichannel commerce software suites, today announced that it has teamed up with the BRCA Foundation, a nonprofit created to fund research for the prevention of “BRCA cancers,” or cancers believed to be caused by “broken” BRCA genes, to help launch a BRCA registry project and help the BRCA Foundation gather information relevant to its mission and organizational vision. A four-person team from NetSuite, together with the BRCA Foundation, created customizations in NetSuite to encourage participants to sign up for the program and ultimately pass voluntary contact and demographic data from a genomics testing partner to the BRCA Foundation, using the NetSuite SuiteCloud development platform. This data will be used to provide individuals with news and information about BRCA cancers, and allow them to connect with potential studies in which they may want to participate.

Established in 2016 by NetSuite Co-founder and NetSuite Global Business Unit Executive Vice President of Development, Evan Goldberg, the BRCA Foundation was created to accelerate research and foster collaboration to prevent and cure BRCA cancers. BRCA1 and BRCA2 are genes that produce proteins that help repair damaged DNA. In people who have a mutation in one of those genes, DNA damage may not be repaired properly, and their cells are more likely to develop additional alterations that can lead to cancer. A nonprofit and a NetSuite customer, the BRCA Foundation applied for pro bono services from Oracle NetSuite Social Impact to help it establish a registry to gather and share data, providing researchers with potential participants for studies that will lead to better treatment and prevention options.

“NetSuite has been incredibly helpful and supportive of our mission every step of the way,” said Gail Fisher, Deputy Director of the BRCA. “It’s amazing what talented people can do with such a flexible platform. The registry is going to go a long way in the fight against cancer and have a huge impact on people with BRCA.”

As a result of the project, the BRCA Foundation now has a button on its website allowing volunteers to sign up for a genetic cancer screening test from its genomic testing partner. If volunteers choose, they can provide the BRCA Foundation with contact and demographic data to be maintained securely within NetSuite. That data can then be used in the fight against cancer, for example by enrolling participants in clinical trials if they wish.

“This project was so gratifying to be a part of,” said Jerome Wi, Solution Consulting Manager at NetSuite and Project Manager for Suite Pro Bono. “I got to lend my development skills to support a project that will ultimately help to fight against cancer, all while using the SuiteCloud development platform.”

Contact Info

Christine Allen
Public Relations, Oracle NetSuite Global Business Unit
603-743-4534
PR@netsuite.com

About SuiteCloud

NetSuite’s SuiteCloud is a comprehensive offering of cloud-based products, development tools and services designed to help customers and commercial software developers take advantage of the significant economic benefits of cloud computing. Based on NetSuite, the industry’s leading provider of cloud-based financials/ERP software suites, SuiteCloud enables customers to run their core business operations in the cloud, and software developers to target new markets quickly with newly-created mission-critical applications built on top of mature and proven business processes.

The SuiteCloud Developer Network (SDN) is a comprehensive developer program for independent software vendors (ISVs) who build apps for SuiteCloud. All available SuiteApps are listed on SuiteApp.com, a single-source online marketplace where NetSuite customers can find applications to meet specific business process or industry-specific needs. For more information on SuiteCloud and the SDN program, please visit www.netsuite.com/developers.

About Oracle NetSuite Global Business Unit

Oracle NetSuite Global Business Unit pioneered the Cloud Computing revolution in 1998, establishing the world’s first company dedicated to delivering business applications over the internet. Today, Oracle NetSuite Global Business Unit provides a suite of cloud-based financials/Enterprise Resource Planning (ERP), HR and omnichannel commerce software that runs the business of companies in more than 100 countries. For more information, please visit http://www.netsuite.com.

Follow Oracle NetSuite Global Business Unit’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • 603-743-4534

Follow Oracle Corporate

Categories: Latest Oracle News

Viewing Memory in PowerShell

Latest Microsoft Server Management News - Tue, 07/18/2017 - 18:18

Hello there, this is Benjamin Morgan, and I'm a Premier Field Engineer covering Active Directory and Platforms-related topics. This is my first blog post, and I hope you are all as excited about this as I am! Today I wanted to talk with you about a couple of quick ways of querying system memory (and provide some background as to *why* I was doing this). Well, without further ado, let's get started…

Preamble

Recently I was working with a customer and they had an interesting problem. They were having an issue retrieving specific user attributes from Active Directory using PowerShell. While building the command in a lab, the original syntax I was given was: Get-ADUser -filter * -properties *

Well, as we all know, friends don't let friends use "-filter * -properties *" because it will return everything about everything, or in this case everything about all AD users, whereas best practice is to fine-tune your scripts so you're not wasting resources on information that is not needed and will never be used. Needless to say, the script would run for a little while and then bomb out.

My first step was obviously to change -filter * to -filter 'name -like "*"' but leave the -properties * so we could identify the exact attributes that were needed. After this change the script ran a little longer but still bombed out. After running the script several times, it consistently failed at the same point, right after retrieving user X, so my first instinct was that the user right after user X might have a corrupted property, and I needed to determine which user was next and see what was going on with that account. I knew that I could do a "for-each" loop, but I didn't want to take the time to do this since the customer was on a time crunch. I modified the "-properties" statement to only return the attributes that were needed and hoped for the best, knowing that if all else failed I would do a "for-each" statement and get the information that way. I then changed the PowerShell command to Get-ADUser -filter 'name -like "*"' -properties PasswordNotRequired, and that command worked. After looking at the full properties of the user accounts directly after the user it had failed on, and seeing that everything returned correctly, I knew that something else was going on; but they had the information they needed, so all was good. So, the real question was what was going on and why it was happening.
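
For reference, here is roughly what the narrowed-down command looked like; the Select-Object at the end is only my own addition for readable output, and PasswordNotRequired was the attribute this customer needed (requires the ActiveDirectory module):

    # Keep the wildcard filter, but ask AD for only the one extra attribute we need.
    Get-ADUser -Filter 'name -like "*"' -Properties PasswordNotRequired |
        Select-Object -Property Name, SamAccountName, PasswordNotRequired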

Before we go on, it's important to be aware that the recommendation for domain controllers is to have enough memory to hold the DIT file in memory, plus the OS requirements and enough for any third-party applications installed on the DC; in other words, the file size of the DIT plus roughly 15-20% for the OS (you can read more on this here: https://social.technet.microsoft.com/wiki/contents/articles/14355.capacity-planning-for-active-directory-domain-services.aspx). In this situation these best practices weren't followed, so the added burden of parsing all of their users caused the DC to essentially hang and prevented PowerShell from returning all the requested information when using wildcards.

Problem/Solution

I knew that since the AD user attributes were good, it had to be something simple, right? Next, I started looking at the domain controller performance. All I had was PowerShell access with no actual rights to do anything except what was given to me by the PowerShell credentials of the person logged into the computer. (This was a server Admin, not a Domain Admin account…) I decided that since they were a fairly large environment, I might want to check the resources on the domain controller I was connected to. This account had rights to log into the DC! That’s a security topic that is discussed in the PAW solution https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/privileged-access-workstations. I figured that the first resource to look at was RAM. A domain controller should have more memory instead of less, and I knew that PowerShell could pull everything but I wasn’t sure how to get everything that I needed. So off to Bing I went… After some research the best way to get the information was to leverage WMI. The issue I encountered was that WMI returns the results in bytes, which is useless to me. Luckily, PowerShell can do just about anything! It can natively convert bytes to GB, KB, MB, or anything else you may want. I am still trying to figure out how to have it make me coffee in the morning though.

I used the Get-WMIObject cmdlet because the customer was still using Windows 7 with PowerShell 2.0. If you are using PowerShell 3.0 or above, you can use Get-CimInstance, which is the more modern way of retrieving this information. The Scripting Guy has a good blog comparing and contrasting Get-WMIObject and Get-CimInstance: https://blogs.technet.microsoft.com/heyscriptingguy/2016/02/08/should-i-use-cim-or-wmi-with-windows-powershell/. The scripts were also run locally for the purpose of putting together this blog, but the commands can be run via the PowerShell remoting option of your choice. The following link explains the different PowerShell remoting options: https://technet.microsoft.com/en-us/library/gg981683.aspx.

The first command is Get-WMIObject win32_ComputerSystem, which returns the computer system's properties, including TotalPhysicalMemory (reported in bytes).

Or you can use Get-CimInstance win32_ComputerSystem | format-list, which returns similar output.

So TotalPhysicalMemory in bytes doesn't do a lot of good unless you want to do the conversion to GB yourself, but as I said, PowerShell is smart enough to do that for me.

According to https://technet.microsoft.com/en-us/library/ee692684.aspx, to do the conversion the value is divided by "1GB", so the command would be Get-WMIObject win32_ComputerSystem | foreach {$_.TotalPhysicalMemory /1GB}. While this command returns the amount of memory in GB, the result is in decimal notation rather than a whole number. The Get-CimInstance equivalent is:

Get-CimInstance win32_ComputerSystem | foreach {$_.TotalPhysicalMemory /1GB}

To truncate the decimal places and show only whole numbers, you can call the .NET Framework's System.Math class ([math]). The command would be:

Get-WMIObject win32_ComputerSystem | foreach {[math]::truncate($_.TotalPhysicalMemory /1GB)}

Get-CimInstance win32_ComputerSystem | foreach {[math]::truncate($_.TotalPhysicalMemory /1GB)}

While this command shows my system as having only 3GB when in reality it has 4GB, that is because truncating simply drops the decimal places rather than rounding to the nearest whole number.

To get a more accurate figure, you can round TotalPhysicalMemory to the nearest whole number instead. The command to do this is:

Get-WMIObject win32_ComputerSystem | foreach {[math]::round($_.TotalPhysicalMemory /1GB)}

Get-CimInstance win32_ComputerSystem | foreach {[math]::round($_.TotalPhysicalMemory /1GB)}

Hopefully this helps you hunt down some of those pesky memory issues, and thanks for reading!

Categories: Latest Microsoft News

Copying Files into a Hyper-V VM with Vagrant

Latest Microsoft Server Management News - Tue, 07/18/2017 - 14:50

A couple of weeks ago, I published a blog with tips and tricks for getting started with Vagrant on Hyper-V. My fifth tip was to “Enable Nifty Hyper-V Features,” where I briefly mentioned stuff like differencing disks and virtualization extensions.

While those are useful, I realized later that I should have added one more feature to my list of examples: the “guest_service_interface” field in “vm_integration_services.” It’s hard to know what that means just from the name, so I usually call it “the thing that lets me copy files into a VM.”

Disclaimer: this is not a replacement for Vagrant’s synced folders. Those are super convenient, and should really be your default solution for sharing files. This method is more useful in one-off situations.

Enabling Copy-VMFile

Enabling this functionality requires a simple change to your Vagrantfile. You need to set “guest_service_interface” to true within the “vm_integration_services” configuration hash. Here’s what my Vagrantfile looks like for CentOS 7:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
    h.vm_integration_services = {
      guest_service_interface: true  #<---------- this line enables Copy-VMFile
    }
  end
end

You can check that it’s enabled by running Get-VMIntegrationService in PowerShell on the host machine:

PS C:\vagrant_selfhost\centos> Get-VMIntegrationService -VMName "centos-7-1-1.x86_64"

VMName              Name                    Enabled PrimaryStatusDescription SecondaryStatusDescription
------              ----                    ------- ------------------------ --------------------------
centos-7-1-1.x86_64 Guest Service Interface True    OK
centos-7-1-1.x86_64 Heartbeat               True    OK
centos-7-1-1.x86_64 Key-Value Pair Exchange True    OK                       The protocol version of...
centos-7-1-1.x86_64 Shutdown                True    OK
centos-7-1-1.x86_64 Time Synchronization    True    OK                       The protocol version of...
centos-7-1-1.x86_64 VSS                     True    OK                       The protocol version of...

Note: not all integration services work on all guest operating systems. For example, this functionality will not work on the “Precise” Ubuntu image that’s used in Vagrant’s “Getting Started” guide. The full compatibility list for various Windows and Linux distributions can be found here. Just click on your chosen distribution and check for “File copy from host to guest.”

Using Copy-VMFile

Once you’ve got a VM set up correctly, copying files to and from arbitrary locations is as simple as running Copy-VMFile in PowerShell.

Here’s a sample test I used to verify it was working on my CentOS VM:

Copy-VMFile -Name 'centos-7-1-1.x86_64' -SourcePath '.\Foo.txt' -DestinationPath '/tmp' -FileSource Host

Full details can be found in the official documentation. Unfortunately, you can’t yet use it to copy files from your VM to your host. If you’re running a Windows guest, you can use Copy-Item with PowerShell Direct to make that work; see this document for more details.

How Does It Work?

The way this works is by running Hyper-V integration services within the guest operating system. Full details can be found in the official documentation. The short version is that integration services are Windows Services (on Windows) or Daemons (on Linux) that allow the guest operating system to communicate with the host. In this particular instance, the integration service allows us to copy files to the VM over the VM Bus (no network required!).

Conclusion

Hope you find this helpful — let me know if there’s anything you think I missed.

John Slack
Program Manager
Hyper-V Team

Categories: Latest Microsoft News

Securely store API keys in R scripts with the &#8220;secret&#8221; package

Latest Microsoft Data Platform News - Tue, 07/18/2017 - 11:40

If you use an API key to access a secure service, or need to use a password to access a protected database, you'll need to provide these "secrets" in your R code somewhere. That's easy to do if you just include those keys as strings in your code — but it's not very secure. This means your private keys and passwords are stored in plain-text on your hard drive, and if you email your script they're available to anyone who can intercept that email. It's also really easy to inadvertently include those keys in a public repo if you use Github or similar code-sharing services.

To address this problem, Gábor Csárdi and Andrie de Vries created the secret package for R. The secret package integrates with OpenSSH, providing R functions that allow you to create a vault for keys on your local machine, define trusted users who can access those keys, and then include encrypted keys in R scripts or packages that can only be decrypted by you or by people you trust. You can see how it works in the vignette secret: Share Sensitive Information in R Packages, and in this presentation by Andrie de Vries at useR!2017.
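As a rough illustration of that workflow, here is a minimal sketch based on the package vignette (the vault path, user name, and key locations are hypothetical, and argument names may differ slightly across package versions):

library(secret)

vault <- file.path(tempdir(), "vault")   # hypothetical vault location
create_vault(vault)

# Register a trusted user by their public key (path is hypothetical)
add_user("alice@example.com", public_key = "~/.ssh/id_rsa.pub", vault = vault)

# Store an API key, encrypted so that only the listed users can read it
add_secret("my_api_key", "s3cr3t-value", users = "alice@example.com", vault = vault)

# Later, in a script, decrypt it with your private key
api_key <- get_secret("my_api_key", key = local_key(), vault = vault)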

 

To use the secret package, you'll need access to your private key, which you'll also need to store securely. For that, you might also want to take a look at the in-progress keyring package, which allows you to access secrets stored in Keychain on macOS, Credential Store on Windows, and the Secret Service API on Linux.

The secret package is available now on CRAN, and you can also find the latest development version on Github.

 

Categories: Latest Microsoft News

Neural Networks from Scratch, in R

Latest Microsoft Data Platform News - Tue, 07/18/2017 - 09:30

By Ilia Karmanov, Data Scientist at Microsoft

This post is for those of you with a statistics/econometrics background but not necessarily a machine-learning one and for those of you who want some guidance in building a neural-network from scratch in R to better understand how everything fits (and how it doesn't).

Andrej Karpathy wrote that when CS231n (Deep Learning at Stanford) was offered:

"we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards".

Why bother with backpropagation when all frameworks do it for you automatically and there are more interesting deep-learning problems to consider?

Nowadays we can literally train a full neural-network (on a GPU) in 5 lines.

import keras
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=RMSprop())
model.fit()

Karpathy abstracts away from the "intellectual curiosity" or "you might want to improve on the core algorithm later" argument. His argument is that the calculations are a leaky abstraction:

“it is easy to fall into the trap of abstracting away the learning process — believing that you can simply stack arbitrary layers together and backprop will 'magically make them work' on your data”

Hence, my motivation for this post is two-fold:

  1. Understanding (by writing from scratch) the leaky abstractions behind neural-networks dramatically shifted my focus to elements whose importance I initially overlooked. If my model is not learning I have a better idea of what to address rather than blindly wasting time switching optimisers (or even frameworks).

  2. A deep-neural-network (DNN), once taken apart into lego blocks, is no longer a black-box that is inaccessible to other disciplines outside of AI. It's a combination of many topics that are very familiar to most people with a basic knowledge of statistics. I believe they need to cover very little (just the glue that holds the blocks together) to get an insight into a whole new realm.

Starting from a linear regression we will work through the maths and the code all the way to a deep-neural-network (DNN) in the accompanying R-notebooks. The hope is to show that very little of it is actually new information.

Step 1 - Linear Regression (See Notebook)


Implementing the closed-form solution for the Ordinary Least Squares estimator in R requires just a few lines:

# Matrix of explanatory variables X
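A minimal sketch of that estimator (not necessarily the notebook's exact code; X is assumed to already include a column of ones for the intercept, and y is the response vector):

# beta_hat = (X'X)^-1 X'y
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y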

The vector of values in the variable beta_hat defines our "machine-learning model". A linear regression is used to predict a continuous variable (e.g. how many minutes will this plane be delayed by). In the case of predicting a category (e.g. will this plane be delayed - yes/no) we want our prediction to fall between 0 and 1 so that we can interpret it as the probability of observing the respective category (given the data).

When we have just two mutually-exclusive outcomes we would use a binomial logistic regression. With more than two outcomes (or "classes"), which are mutually-exclusive (e.g. this plane will be delayed by less than 5 minutes, 5-10 minutes, or more than 10 minutes), we would use a multinomial logistic regression (or "softmax"). In the case of many (n) classes that are not mutually-exclusive (e.g. this post references "R" and "neural-networks" and "statistics"), we can fit n-binomial logistic regressions.

An alternative approach to the closed-form solution we found above is to use an iterative method, called Gradient Descent (GD). The procedure may look like so:

  • Start with a random guess for the weights
  • Plug guess into loss function
  • Move guess in the opposite direction of the gradient at that point by a small amount (something we call the 'learning-rate')
  • Repeat above for N steps

GD only uses first-order gradient information (the Jacobian rather than the Hessian); however, when we have a convex loss all local minima are global minima, and thus GD (with a suitable learning-rate) is guaranteed to converge to the global minimum.

The loss-function used for a linear-regression is the Mean Squared Error:

\begin{equation*} C = \frac{1}{2n}\sum_x(y(x) - a(x))^2 \end{equation*}

To use GD we only need to find the partial derivative of this with respect to beta_hat (the 'delta'/gradient).

This can be implemented in R, like so:

# Start with a random guess beta_hat
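A minimal sketch of that loop (not necessarily the notebook's exact code; lr is a hypothetical learning-rate and X, y are as above):

lr <- 0.1
n  <- nrow(X)
beta_hat <- matrix(rnorm(ncol(X)), ncol = 1)     # random starting guess

for (i in 1:200) {
  residuals <- X %*% beta_hat - y                # a(x) - y(x)
  delta     <- t(X) %*% residuals / n            # gradient of the MSE loss
  beta_hat  <- beta_hat - lr * delta             # step against the gradient
}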

Running this for 200 iterations gets us to essentially the same coefficients as the closed-form solution (with the gradient approaching zero). Aside from being a stepping stone to a neural-network (where we use GD), this iterative method can be useful in practice when the closed-form solution cannot be calculated because the matrix is too big to invert (to fit into memory).

Step 2 - Logistic Regression (See Notebook)


A logistic regression is a linear regression for binary classification problems. The two main differences to a standard linear regression are:

  1. We use an 'activation'/link function called the logistic-sigmoid to squash the output to a probability bounded by 0 and 1
  2. Instead of minimising the quadratic loss we minimise the negative log-likelihood of the Bernoulli distribution

Everything else remains the same.

We can calculate our activation function like so:

sigmoid <- function(z) { 1 / (1 + exp(-z)) }

We can create our log-likelihood function in R:

log_likelihood
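A plausible sketch of that function (assuming the sigmoid defined above and 0/1 labels in y; the notebook's version may differ):

log_likelihood <- function(X, y, beta_hat) {
  p <- sigmoid(X %*% beta_hat)                   # predicted probabilities
  sum(y * log(p) + (1 - y) * log(1 - p))         # Bernoulli log-likelihood (we minimise its negative)
}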

This loss function (the logistic loss or the log-loss) is also called the cross-entropy loss. The cross-entropy loss is basically a measure of 'surprise' and will be the foundation for all the following models, so it is worth examining a bit more.

If we simply constructed the least-squares loss like before then, because we now have a non-linear activation function (the sigmoid), the loss would no longer be convex, which would make optimisation hard.

\begin{equation*} C = \frac{1}{2n}\sum_x(y(x) - a(x))^2 \end{equation*}

We could construct our own loss function for the two classes. When (y=1), we want our loss function to be very high if our prediction is close to 0, and very low when it is close to 1. When (y=0), we want our loss function to be very high if our prediction is close to 1, and very low when it is close to 0. This leads us to the following loss function:

\begin{equation*} C = -\frac{1}{n}\sum_x\left[y(x)\ln(a(x)) + (1 - y(x))\ln(1-a(x))\right] \end{equation*}

The delta for this loss function is pretty much the same as the one we had earlier for a linear-regression. The only difference is that we apply our sigmoid function to the prediction. This means that the GD function for a logistic regression will also look very similar:

logistic_reg

Step 3 - Softmax Regression (No Notebook)


A generalisation of the logistic regression is the multinomial logistic regression (also called 'softmax'), which is used when there are more than two classes to predict. I haven't created this example in R, because the neural-network in the next step can reduce to something similar, however for completeness I wanted to highlight the main differences if you wanted to create it.

First, instead of using the sigmoid function to squash our (one) value between 0 and 1:

\begin{equation*} \sigma(z)=\frac{1}{1+e^{-z}} \end{equation*}

We use the softmax function to squash our \(n\) values (for \(n\) classes) so that they sum to 1:

\begin{equation*} \phi(z)=\frac{e^{z_j}}{\sum_k e^{z_k}} \end{equation*}

This means the value supplied for each class can be interpreted as the probability of that class, given the evidence. This also means that when we see the target class and increase the weights to increase the probability of observing it, the probability of the other classes will fall. The implicit assumption is that our classes are mutually exclusive.
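A small sketch of the softmax in R (subtracting the maximum is purely for numerical stability and does not change the result):

softmax <- function(z) {
  z <- z - max(z)
  exp(z) / sum(exp(z))
}

# e.g. softmax(c(1, 2, 3)) returns three probabilities that sum to 1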

Second, we use a more general version of the cross-entropy loss function:

\begin{equation*} C = -\frac{1}{n}\sum_x\sum_j y_j\ln(a_j) \end{equation*}

To see why, remember that for binary classification (the previous example) we had two classes: \(j=2\). Under the condition that the categories are mutually exclusive, \(\sum_j a_j=1\), and that \(y\) is one-hot so that \(y_1+y_2=1\), we can re-write the general formula as:

\begin{equation*} C = -\frac{1}{n}\sum_x\left[y_1\ln(a_1) + (1 - y_1)\ln(1-a_1)\right] \end{equation*}

Which is the same equation we first started with. However, now we relax the constraint that (j=2). It can be shown that the cross-entropy loss here has the same gradient as for the case of the binary/two-class cross-entropy on logistic outputs.

\begin{equation*} \frac{\partial C}{\partial \beta_i} = \frac{1}{n}\sum_x x_i(a(x) - y) \end{equation*}

However, although the gradient has the same formula it will be different because the activation here takes on a different value (softmax instead of logistic-sigmoid).

In most deep-learning frameworks you have the choice of 'binary-crossentropy' or 'categorical-crossentropy' loss. Depending on whether your last layer contains sigmoid or softmax activation you would want to choose binary or categorical cross-entropy (respectively). The training of the network should not be affected, since the gradient is the same, however the reported loss (for evaluation) would be wrong if these are mixed up.

The motivation to go through softmax is that most neural-networks will use a softmax layer as the final/'read-out' layer, with a multinomial/categorical cross-entropy loss instead of using sigmoids with a binary cross-entropy loss — when the categories are mutually exclusive. Although multiple sigmoids for multiple classes can also be used (and will be used in the next example), this is generally only used for the case of non-mutually-exclusive labels (i.e. we can have multiple labels). With a softmax output, since the sum of the outputs is constrained to equal 1, we have the advantage of interpreting the outputs as class probabilities.

Step 4 - Neural Network (See Notebook)


A neural network can be thought of as a series of logistic regressions stacked on top of each other. This means we could say that a logistic regression is a neural-network (with sigmoid activations) with no hidden-layer.

This hidden-layer lets a neural-network generate non-linearities and leads to the Universal approximation theorem, which states that a network with just one (sufficiently large) hidden layer can approximate any continuous function to arbitrary accuracy. The number of hidden-layers can go into the hundreds.

It can be useful to think of a neural-network as a combination of two things: 1) many logistic regressions stacked on top of each other that are 'feature-generators' and 2) one read-out layer which is just a softmax regression. The recent successes in deep-learning can arguably be attributed to the 'feature-generators'. For example, previously with computer vision we had to painfully state that we wanted to find triangles, circles, colours, and in what combination (similar to how economists decide which interaction-terms they need in a linear regression). Now, the hidden-layers are basically an optimisation that decides which features (which 'interaction-terms') to extract. A lot of deep-learning (transfer learning) is actually done by generating features using a trained model with the head (read-out layer) cut off, and then training a logistic regression (or boosted decision-trees) using those features as inputs.

The hidden-layer also means that our loss function is not convex in the parameters and we can't roll down a smooth hill to get to the bottom. Instead of using Gradient Descent (which we did for the case of a logistic-regression) we will use Stochastic Gradient Descent (SGD), which basically shuffles the observations (random/stochastic) and updates the gradient after each mini-batch (generally much smaller than the total number of observations) has been propagated through the network. There are many alternatives to SGD that Sebastian Ruder does a great job of summarising here. I think this is a fascinating topic to go through, but outside the scope of this blog-post. Briefly, however, the vast majority of the optimisation methods are first-order (including SGD, Adam, RMSprop, and Adagrad) because calculating the second-order derivatives is too computationally expensive. However, some of these first-order methods have a fixed learning-rate (SGD) and some have an adaptive learning-rate (Adam), which means that the 'amount' we update our weights by becomes a function of the loss - we may make big jumps in the beginning but then take smaller steps as we get closer to the target.

It should be clear, however, that minimising the loss on training data is not the main goal - in theory we want to minimise the loss on 'unseen'/test data; hence all the optimisation methods proxy for that under the assumption that a low loss on training data will generalise to 'new' data from the same distribution. This means we may prefer a neural-network with a higher training-loss because it has a lower validation-loss (on data it hasn't been trained on) - we would typically say the network has 'overfit' in this case. There have been some recent papers claiming that adaptive optimisation methods do not generalise as well as SGD because they find very sharp minima points.

Previously we only had to back-propagate the gradient one layer, now we also have to back-propagate it through all the hidden-layers. Explaining the back-propagation algorithm is beyond the scope of this post, however it is crucial to understand. Many good resources exist online to help.

We can now create a neural-network from scratch in R using four functions.

First, we initialise our weights:

neuralnetwork

Since we now have a complex combination of parameters we can't just initialise them to be 1 or 0, like before - the network may get stuck. To help, we use the Gaussian distribution (however, just like with the optimisation, there are many other methods):
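For illustration, a sketch of Gaussian initialisation for a small network (the layer sizes and variable names here are hypothetical, not the notebook's):

sizes <- c(4, 40, 3)   # e.g. 4 inputs, one hidden layer of 40 neurons, 3 outputs

biases  <- lapply(sizes[-1], function(n) matrix(rnorm(n), nrow = n, ncol = 1))
weights <- mapply(function(n_in, n_out) matrix(rnorm(n_in * n_out), nrow = n_out, ncol = n_in),
                  sizes[-length(sizes)], sizes[-1], SIMPLIFY = FALSE)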

Second, we use stochastic gradient descent as our optimisation method:

Third, as part of the SGD method, we update the weights after each mini-batch has been forward and backwards-propagated:

Fourth, the algorithm we use to calculate the deltas is the back-propagation algorithm.

In this example we use the cross-entropy loss function, which produces the following gradient:

cost_delta
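A sketch consistent with that gradient (the notebook's cost_delta may take different arguments):

cost_delta <- function(y, a) { a - y }   # delta at the output layer for the cross-entropy loss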

Also, to be consistent with our logistic regression example we use the sigmoid activation for the hidden layers and for the read-out layer:

# Calculate activation function sigmoid
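A sketch of the sigmoid together with its derivative, which back-propagation needs:

sigmoid       <- function(z) { 1 / (1 + exp(-z)) }
sigmoid_prime <- function(z) { sigmoid(z) * (1 - sigmoid(z)) }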

As mentioned previously, usually the softmax activation is used for the read-out layer. For the hidden layers, ReLU is more common, which is just the max function (negative inputs get flattened to 0). The activation function for the hidden layers can be imagined as a race to carry a baton/flame (gradient) without it dying. The sigmoid function flattens out at 0 and at 1, resulting in a flat gradient, which is equivalent to the flame dying out (we have lost our signal). The ReLU function helps preserve this gradient.

The back-propagation function is defined as:

backprop
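To make the mechanics concrete, here is a deliberately simplified sketch of one forward and backward pass for a single hidden layer and a single observation (illustrative only; the notebook's backprop is more general and vectorised over mini-batches):

# x: column vector of inputs; y: one-hot column vector of targets
# W1, b1: hidden-layer weights/biases; W2, b2: read-out layer weights/biases
forward_backward <- function(x, y, W1, b1, W2, b2, lr = 0.1) {
  # forward pass
  z1 <- W1 %*% x + b1;  a1 <- sigmoid(z1)
  z2 <- W2 %*% a1 + b2; a2 <- sigmoid(z2)

  # backward pass (cross-entropy loss)
  delta2 <- a2 - y                                  # delta at the read-out layer
  delta1 <- (t(W2) %*% delta2) * sigmoid_prime(z1)  # delta propagated back to the hidden layer

  # gradient-descent update of all weights and biases
  list(W1 = W1 - lr * delta1 %*% t(x),
       b1 = b1 - lr * delta1,
       W2 = W2 - lr * delta2 %*% t(a1),
       b2 = b2 - lr * delta2)
}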

Check out the notebook for the full code — however the principle remains the same: we have a forward-pass where we generate our prediction by propagating the weights through all the layers of the network. We then plug this into the cost gradient and update the weights through all of our layers.

This concludes the creation of a neural network (with as many hidden layers as you desire). It can be a good exercise to replace the hidden-layer activation with ReLU and the read-out layer with softmax, and also to add L1 and L2 regularization. Running this on the iris dataset in the notebook (which contains 4 explanatory variables with 3 possible outcomes), with just one hidden-layer containing 40 neurons, we get an accuracy of 96% after 30 rounds/epochs of training.

The notebook also runs a 100-neuron handwriting-recognition example to predict the digit corresponding to a 28x28 pixel image.

Step 5 - Convolutional Neural Network (See Notebook)


Here, we will briefly examine only the forward-propagation in a convolutional neural-network (CNN). CNNs were first made popular in 1998 by LeCun's seminal paper. Since then, they have proven to be the best method we have for recognising patterns in images, sounds, videos, and even text!

Image recognition was initially a manual process; researchers would have to specify which bits (features) of an image were useful to identify. For example, if we wanted to classify an image into 'cat' or 'basketball' we could have created code that extracts colours (basketballs are orange) and shapes (cats have triangular ears). Perhaps with a count of these features we could then run a linear regression to get the relationship between the number of triangles and whether the image is a cat or a basketball. This approach suffers from issues of image scale, angle, quality and light. Scale Invariant Feature Transformation (SIFT) largely improved upon this and was used to provide a 'feature description' of an object, which could then be fed into a linear regression (or any other relationship learner). However, this approach had set-in-stone rules that could not be optimally altered for a specific domain.

CNNs look at images (extract features) in an interesting way. To start, they look only at very small parts of an image (at a time), perhaps through a restricted window of 5 by 5 pixels (a filter). 2D convolutions are used for images, and these slide the window across until the whole image has been covered. This stage would typically extract colours and edges. However, the next layer of the network would look at a combination of the previous filters and thus 'zoom-out'. After a certain number of layers the network would be 'zoomed-out' enough to recognise shapes and larger structures.

These filters end up as the 'features' that the network has learned to identify. It can then pretty much count the presence of each feature to identify a relationship with the image label ('basketball' or 'cat'). This approach appears quite natural for images — since they can be broken down into small parts that describe them (colours, textures, etc.). CNNs appear to thrive on the fractal-like nature of images. This also means they may not be a great fit for other forms of data such as an excel worksheet where there is no inherent structure: we can change the column order and the data remains the same — try swapping pixels in an image (the image changes)!

In the previous example we looked at a standard neural-net classifying handwritten text. In that network each neuron from layer \(i\) was connected to each neuron at layer \(j\) — our 'window' was the whole image. This means if we learn what the digit '2' looks like, we may not recognise it when it is written upside down by mistake, because we have only seen it upright. CNNs have the advantage of looking at small bits of the digit '2' and finding patterns between patterns between patterns. This means that a lot of the features they extract may be immune to rotation, skew, etc. Brandon Rohrer explains here in more detail what a CNN actually is.

We can define a 2D convolution function in R:

convolution
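A minimal sketch of a 2D 'valid' convolution in base R (the notebook's implementation may differ):

convolution <- function(img, filter) {
  fr <- nrow(filter); fc <- ncol(filter)
  out <- matrix(0, nrow(img) - fr + 1, ncol(img) - fc + 1)
  for (i in 1:nrow(out)) {
    for (j in 1:ncol(out)) {
      # element-wise product of the filter with the image patch under the window
      out[i, j] <- sum(img[i:(i + fr - 1), j:(j + fc - 1)] * filter)
    }
  }
  out
}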

And use it to apply a 3x3 filter to an image:

conv_emboss

You can check the notebook to see the result; this one seems to extract the edges from a picture. Other convolutions can 'sharpen' an image, like this 3x3 filter:

conv_sharpen
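The notebook's exact kernels aren't reproduced here, but commonly used 3x3 emboss and sharpen kernels look something like this (img is assumed to be a greyscale image matrix):

conv_emboss  <- matrix(c(-2, -1, 0,
                         -1,  1, 1,
                          0,  1, 2), nrow = 3, byrow = TRUE)

conv_sharpen <- matrix(c( 0, -1,  0,
                         -1,  5, -1,
                          0, -1,  0), nrow = 3, byrow = TRUE)

# e.g. convolution(img, conv_emboss) or convolution(img, conv_sharpen)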

Typically we would randomly initialise a number of filters (e.g. 64):

filter_map
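A sketch of that initialisation (hypothetical shapes: 64 random 3x3 filters, each applied to the same greyscale image matrix img):

filters    <- lapply(1:64, function(i) matrix(rnorm(9), nrow = 3))
filter_map <- lapply(filters, function(f) convolution(img, f))   # one feature map per filter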

We can visualise this map with a plotting function (included in the notebook).


Running this function, we notice how computationally intensive the process is (compared to a standard fully-connected layer). If these feature maps are not useful 'features' (i.e. the loss is difficult to decrease when they are used) then back-propagation will mean we get different weights, which correspond to different feature-maps, which will become more useful for making the classification.

Typically we stack convolutions on top of other convolutions (and hence the need for a deep network) so that edges becomes shapes and shapes become noses and noses become faces. It can be interesting to examine some feature maps from trained networks to see what the network has actually learnt.

Download Notebooks

You can find notebooks implementing the code behind this post on Github by following the links in the section headings, or as Azure Notebooks at the link below:

Azure Notebooks: NeuralNetR

Categories: Latest Microsoft News

Datacenter efficiency gets easier with new Windows Server Software Defined partner solutions

Latest Microsoft Server Management News - Tue, 07/18/2017 - 09:00

The island of Bora Bora. The finish line at a marathon. A software-defined datacenter. What do they have in common? Being there is easy; getting there is the hard part. However, in the latter case at least, you can let someone else do the hard part for you.

There's so much efficiency, simplicity, and cost-savings to be gained by moving to a software-defined infrastructure, but many IT organizations lack the resources to design and implement it themselves. If that sounds like your organization, check out the solutions provided by the partners in our Windows Server Software-Defined (WSSD) program, including DataON, Fujitsu, HPE, Lenovo, QCT, and Supermicro. This growing lineup of partners offers an array of validated WSSD solutions that work with Windows Server 2016 to deliver the benefits of software-defined infrastructure.

With WSSD validated solutions, you can tap into similar technologies used to run hyper-scale datacenters such as Microsoft Azure. Azure runs on Windows Server, and the Datacenter edition of Windows Server 2016 includes many of the same technologies that Microsoft uses to support Azure. These new capabilities are built into the OS, so you won't need to buy any additional software. In addition, you'll realize significant price/performance results by taking advantage of cutting-edge devices such as NVMe drives, RDMA NICs, and NVDIMMs at price points that are much better than traditional external storage devices.

To earn certification by the Windows Server Software-Defined program, partners must meet Microsoft standards for quality, accelerated time to value, out-of-the-box optimization, and expedited problem resolution. Our validation and certification process is one of the most rigorous in the industry. Each component is certified and the end-to-end solution is validated using Microsoft's test harness. Deployments use prescriptive, automated tooling that cuts deployment time from days or weeks to mere hours. You'll be up and running by the time the WSSD partner leaves your site, with a single point of contact for support.

Partners offer three kinds of WSSD solutions:

  • Hyper-Converged Infrastructure (HCI) Standard: Highly virtualized compute and storage are combined in the same server-node cluster, making them easier to deploy, manage, and scale.
  • Hyper-Converged Infrastructure (HCI) Premium: A comprehensive "software-defined datacenter in a box" that adds software-defined networking and Security Assurance features to HCI Standard. This makes it easy to scale compute, storage, and networking up and down to meet demand, just like public cloud services.
  • Software-Defined Storage (SDS): Built on server-node clusters, this enterprise-grade, shared-storage solution replaces traditional external storage devices at a much lower cost, while support for all-flash NVMe drives delivers unrivaled performance. You can quickly add storage capacity as your needs grow over time.

Take the next step: find your WSSD partner today.

And a few words from our partners

"DataON is very proud to be a Microsoft partner, exclusively focused on Microsoft server-based storage solutions," said Howard Lo, Vice President, Sales & Marketing, DataON. "We combine our hyper-converged platform and exclusive MUST visibility and management tool for Windows Server SDS to deliver a WSSD-validated solution to give our customers the greatest confidence in their choice to deploy a Windows Server software-defined infrastructure."

"We are excited to share QCT's Windows Server 2016 certified servers and our QxStack Microsoft Cloud Ready Solutions with the world's leading businesses and institutions so they can enhance their software-defined compute, storage, networking, virtualization, flexibility, and infrastructure scalability," said Mike Yang, President of QCT. "The complementing solutions allow us to fulfill the continuing needs of our mutual customers by delivering faster time-to-value, flexible and innovative cloud-ready solutions, and to do so in a low-risk and cost-effective manner."

"Supermicro has partnered with Microsoft to bring to market a Hyper-Converged Infrastructure (HCI) premium solution based on our industry-leading NVMe systems," said Henry Kung, VP of server integration, Supermicro. "We worked closely with the Windows Server team to certify Windows Server Storage Spaces Direct using our highest-performing 1U all-NVMe platform. This HCI premium solution is a cost-effective software-defined data center (SDDC) in a 4-node solution with all the benefits of Microsoft software-defined storage and networking. This HCI premium solution is certified and ready to be deployed in large volume in enterprise and cloud-scale datacenters."

Categories: Latest Microsoft News

New validated Windows Server Software Defined solutions from our partners

Latest Microsoft Server Management News - Tue, 07/18/2017 - 09:00

We are pleased to announce a new set of validated software-defined datacenter solutions are now available from our Windows Server partners, including DataON, Fujitsu, HPE, Lenovo, QCT, and Supermicro. These hyper-converged solutions make it faster and easier to deploy software-defined compute, storage, and networking in your datacenter. In addition to providing validated hardware solutions that meet the Microsoft reference architecture, these partners offer deployment services and one-stop technical support.

Partners offer three kinds of WSSD solutions:

  • Hyper-Converged Infrastructure (HCI) Standard: Highly virtualized compute and storage are combined in the same server-node cluster, making them easier to deploy, manage, and scale.
  • Hyper-Converged Infrastructure (HCI) Premium: Comprehensive “software-defined datacenter in a box” adds software-defined networking and security features to HCI Standard.
  • Software-Defined Storage (SDS): Built on server-node clusters, this enterprise-grade, shared-storage solution replaces traditional external storage devices at a much lower cost, while support for all-flash NVMe drives delivers unrivaled performance.

Learn more by reading the Hybrid Cloud blog post Datacenter efficiency gets easier with new Windows Server Software Defined partner solutions.

Categories: Latest Microsoft News

Microsoft Drivers v4.3.0 for PHP for SQL Server released!

Latest Microsoft Data Platform News - Tue, 07/18/2017 - 09:00

This post was authored by Meet Bhagdev, Program Manager, Microsoft

Hi all,

We are excited to announce the Production Ready release for the Microsoft Drivers v4.3.0 for PHP for SQL Server. The drivers now support Debian Jessie and macOS. The driver enables access to SQL Server, Azure SQL Database, and Azure SQL DW from any PHP application on Linux, Windows, and macOS.

Notable items for the release:

Added

  • Support for Debian Jessie and macOS

Fixed
  • Fixed the assertion error (Linux) when fetching data from a binary column using the binary encoding (issue #226)
  • Fixed PECL installation errors when PHP was installed from source (issue #213)
  • Fixed issue output parameters bound to empty string (issue #182)
  • Fixed a memory leak in closing connection resources
  • Fixed load ordering issue in MacOS (issue #417)
  • Fixed the issue with driver loading order in macOS
  • Fixed null returned when an empty string is set to an output parameter (issue #308)
  • SQLSRV only
    • Fixed sqlsrv client buffer size to only allow positive integers (issue #228)
    • Fixed sqlsrv_num_rows() when the client buffered result is null (issue #330)
    • Fixed issues with sqlsrv_has_rows() to prevent it from moving statement cursor (issue #37)
    • Fixed conversion warnings because of some const chars (issue #332)
    • Fixed debug abort error when building the driver in debug mode with PHP 7.1
    • Fixed string truncation when binding varchar(max), nvarchar(max), varbinary(max), and xml types (issue #231)
    • Fixed fatal error when fetching empty nvarchar (issue #69)
    • Fixed fatal error when calling sqlsrv_fetch() with an out of bound offset for SQLSRV_SCROLL_ABSOLUTE (issue #223)
  • PDO_SQLSRV only
    • Fixed issue with SQLSRV_ATTR_FETCHES_NUMERIC_TYPE when column return type is set on statement (issue #173)
    • Improved performance by implementing a cache to store column SQL types and display sizes (issue #189)
    • Fixed segmentation fault with PDOStatement::getColumnMeta() when the supplied column index is out of range (issue #224)
    • Fixed issue with the unsupported attribute PDO::ATTR_PERSISTENT in connection (issue #65)
    • Fixed the issue with executing DELETE operation on a nonexistent value (issue #336)
    • Fixed incorrectly binding of unicode parameter when emulate prepare is on and the encoding is set at the statement level (issue #92)
    • Fixed binary column binding when emulate prepare is on (issue #140)
    • Fixed wrong value returned when fetching varbinary value on Linux (issue #270)
    • Fixed binary data not returned when the column is bound by name (issue #35)
    • Fixed exception thrown on closeCursor() when the statement has not been executed (issue #267)
Limitations
  • No support for input/output parameters when using sql_variant type
Known issue
  • When pooling is enabled in Linux or macOS
    • unixODBC <= 2.3.4 (Linux and macOS) might not return proper diagnostics information, such as error messages, warnings, and informative messages
    • Because of this unixODBC bug, fetch large data (such as xml, binary) as streams as a workaround. See the examples here.
Survey

Let us know how we are doing and how you use our driver by taking our pulse survey: https://www.surveymonkey.com/r/CZNSBYW.

Get started

Getting Drivers for PHP 5 and older runtimes

You can download the Microsoft PHP Drivers for SQL Server for PHP 5.4, 5.5, and 5.6 from the download center: https://www.microsoft.com/en-us/download/details.aspx?id=20098. Version 3.0 supports PHP 5.4, version 3.1 supports PHP 5.4 and PHP 5.5, and version 3.2 supports PHP 5.4, 5.5, and 5.6.

PHP Driver    Version Supported
v3.2          PHP 5.6, 5.5, 5.4
v3.1          PHP 5.5, 5.4
v3.0          PHP 5.4

Meet Bhagdev (meetb@microsoft.com)

Categories: Latest Microsoft News
