Feed aggregator

Hurricane Irma’s rains, visualized with R

The USGS has followed up their visualization of Hurricane Harvey rainfalls with an updated version of the animation, this time showing the rain and flooding from Hurricane Irma in Florida:

Another #rstats #dataviz! Precip and #flooding from #HurricaneIrma 💧 #opensource code: https://t.co/rpocPQe7zR #openscience pic.twitter.com/rGX1SNiYEM

— USGS R Community (@USGS_R) September 15, 2017

This iteration improves on the Harvey version by displaying rainfall in a fine grid over the state rather than on a county-by-county basis. Once again you can find the R code and data on GitHub, and an interactive version of the chart is available at the link below.

USGS Vizlab: Hurricane Irma's Water Footprint

Categories: Latest Microsoft News

Recap: Applications of R at EARL London 2017

Latest Microsoft Data Platform News - Mon, 09/18/2017 - 16:17

The fourth EARL London conference took place last week, and once again it was an enjoyable and informative showcase of practical applications of R. Kudos to the team from Mango for hosting a great event featuring interesting talks and a friendly crowd. 

As always, there were more talks on offer than I was able to attend (most of the event was in three parallel tracks), but here are a few highlights that I was able to catch:

Jenny Bryan from RStudio gave an outstanding keynote: "Workflows: You Should Have One". The talk was a really useful collection of good practices for data analysis with R. Note I didn't say "best practices" there, and that's something that resonated with me from Jenny's talk: sometimes the best practices are the ones that are "good enough" and don't add unnecessary complexity or fragility. Software Carpentry's "Good Enough Practices in Scientific Computing" is a great read that I hadn't come across before this talk.

Rachel Kirkham from the National Audit Office uses R and Shiny to scrutinize public spending on behalf of taxpayers in the UK. (View the slides here.) One interesting beneficial aspect of R for this application is the ability to bring R to the data, thereby sidestepping regulations that forbid the data being exported from the NAO data center.

Joy McKenney from Northumbrian Water gave a truly fascinating talk on using R to monitor and predict flows in a sewer system. (View the slides here.) One surprising application from this talk was being able to model whether an overflowing sewer line is due to rainfall or because of a blockage in the system — a tool that can be used to detect "fatbergs" clogging up the system.

Pieter Vos from Philips Research showed how they deploy applications to doctors to evaluate things like cancer risk. (View the slides here.) The applications call out to R to make the underlying statistical calculations.

Joe Cheng from RStudio gave a remarkable talk on a major concept for the R language itself: "promises". (View the slides here.) With promises, you can ask for the results of a long-running computation, but get back to the R command line instantaneously. The results will be available to you, well, when they're ready. It sounds like magic, but it's already working (in beta) with the promises package and promises (pun intended) to bring more responsiveness to Shiny applications.
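For readers more familiar with futures in other languages, the same idea can be sketched in Python with `concurrent.futures` — an analogy to illustrate the concept, not the R `promises` API itself:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def long_running_computation():
    time.sleep(0.1)  # stand-in for an expensive model fit or query
    return 42

executor = ThreadPoolExecutor(max_workers=1)

# submit() returns immediately with a Future; your "command line" stays free
future = executor.submit(long_running_computation)

# result() blocks only at the moment you actually need the value
print(future.result())   # 42
executor.shutdown()
```

The promise/future is a handle to a value that will exist later; Shiny can use such handles to keep the UI responsive while heavy computations run elsewhere.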

Ashley Turner from Transport for London gave a talk that I was very sad I couldn't see (it conflicted with my own session), but fortunately the slides are available. R has been used for over 2 years to model how Londoners get about via car, bus and Tube. It was used for an official report on London transportation, which included some fascinating data on the routes certain commuters prefer for getting from (say) Kings Cross to Waterloo.

Matthew Upson from the Government Digital Service reported on the ongoing program to implement a reproducible data science workflow for creating official reports. (View the slides here.) He says that this process has reduced production time for some official reports by 75%, and you can learn more about the program here.

I was also informed by the organizers that my own talk on Reproducible Data Science with R (slides here) won the "award" for loudest presentation at the conference. I think the audio desk had the volume up a little too high for this video.

As I said, there are many more excellent talks beyond those listed above. You can explore the other talks by clicking here and then clicking through each speaker portrait — slides, where available, are linked at the bottom of each abstract.

The next EARL conference will take place in Boston, November 1-3 and promises to include even more practical applications of R. I hope to see you there!


Marching into the future of the Azure AD admin experience: retiring the Azure classic portal

Latest Microsoft Server Management News - Mon, 09/18/2017 - 09:00

Howdy folks,

Since we announced General Availability of the new Azure AD admin center in May, it’s been used by over 800,000 users from 500,000 organizations in almost every country in the world. The new admin center is the future for administration of Azure AD.

For over a year, we’ve been listening to your feedback and working to improve the new portal and the new experience. And we’ve heard you loud and clear that we have too many portals, that you want a single place where you can manage identity and access for your organization. So, on November 30, we’ll be retiring the Azure AD admin experience in the classic Azure portal.

Moving all admin capabilities to the new admin center and retiring our classic portal experience is a key milestone in our efforts to simplify the admin experience for Azure AD.

Azure AD admin center: the present and future for Azure AD administration

Now, the Azure AD admin center is where you can go to find admin experiences for the latest and greatest Azure AD capabilities. By focusing on the Azure AD admin center, we can make our admin experiences more consistent, and easier to use. And we can deliver them faster.

At the moment, there are a few tasks that can still only be done in the classic Azure portal. Don’t worry, these capabilities will be added to our new admin experience in the next few weeks, well before November 30.

Azure Information Protection and Access Control Service

The Azure Information Protection (or AIP, formerly Rights Management Service) admin experiences will also be retired in the Azure classic portal on November 30, but can be found here in the new Azure portal.

To learn more about Azure Information Protection, read our documentation. To share feedback about Azure Information Protection, send an email to MSIPAppFeedback.

Additionally, after November 30, admin experiences for Access Control Services will be available at a different URL. We’ll communicate the details of that change soon.

Wrapping up

We hope you love using the Azure AD admin center! If you have questions about using or administering Azure AD, reach out to our engineering team and our community in our forum. And if you’ve got specific feedback on our admin portal experience, like bug reports or feature requests, post them in the ‘Admin portal’ section of our feedback forum.

Thanks for your continued feedback! It's what guides us as we work to make the admin experience the best it can be for you. Keep sharing your thoughts; we're always listening.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division


Active Directory Access Control List – Attacks and Defense

Latest Microsoft Server Management News - Mon, 09/18/2017 - 09:00

Recently there has been a lot of attention and a few different blog posts (references at the end of the post) regarding the use of Discretionary Access Control Lists (DACLs) for privilege escalation in a domain environment. This potential attack vector involves the creation of an escalation path based on AD object permissions (DACLs). For example, gaining Reset Password permissions on a privileged account is one possible way to compromise it via a DACL path.

Although DACL permissions are not the easiest topic to cover in one post and should be digested slowly, there are examples of potential attack scenarios we want to share. This post tries to shed some light on the subject, present possible escalation paths, and suggest relevant mitigations.

Abstract

  • Active Directory Access Control
  • Protected accounts and groups
  • Delegated permissions
  • ACLs-Only escalation path
  • Hybrid path
  • Takeaways

The Microsoft Windows environment implements access control by assigning security descriptors to objects stored in Active Directory. A security descriptor is a set of information attached to every object and contains four security components. In this blog, we will focus on the object owner (which user owns the object) and the Discretionary Access Control List (DACL – which users and groups are allowed or denied access). The two other components are the SACL, which defines which users' and groups' access should be audited, and the inheritance settings of access control information.

A DACL is a list of access control entries (ACEs). Each ACE contains a security identifier (SID) and specifies the access rights allowed or denied for that SID. When an access request is made on an object, the system checks the ACEs in sequence until it finds one or more ACEs that match the SIDs in the requestor's token, and either denies or allows the requested access.
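A simplified model of that evaluation order can be sketched in Python (hypothetical types and SIDs; real DACL evaluation also handles inheritance flags, object-type ACEs and canonical ordering):

```python
from dataclasses import dataclass

@dataclass
class ACE:
    sid: str        # security identifier this entry applies to
    allow: bool     # True = access-allowed ACE, False = access-denied ACE
    rights: set     # e.g. {"read", "write", "reset_password"}

def check_access(dacl, token_sids, requested):
    """Walk the DACL in order: a matching deny wins immediately,
    allows accumulate until every requested right is granted."""
    granted = set()
    for ace in dacl:
        if ace.sid not in token_sids:
            continue
        if not ace.allow and requested & ace.rights:
            return False            # explicit deny matched first
        if ace.allow:
            granted |= ace.rights
            if requested <= granted:
                return True         # all requested rights are allowed
    return False                    # empty or exhausted DACL denies access

dacl = [
    ACE("S-1-5-21-1004-1104", False, {"write"}),   # deny write to one user
    ACE("S-1-5-11", True, {"read"}),               # allow read to Authenticated Users
]
print(check_access(dacl, {"S-1-5-11"}, {"read"}))  # True
```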

Figure 1 – The DACL (discretionary access control list) applied on an OU named New York.

By default, permissions on Active Directory objects are controlled by the built-in security accounts (users and groups), and they populate most, if not all, of an Active Directory object's DACL. Those ACEs are usually inherited from the domain object itself (e.g. DC=atalab,DC=local).

Figure 2 – Access permissions of “Authenticated Users” applied on “user1” account object (e.g. CN=user1,DC=atalab,DC=local)

Among the domain accounts, the most attractive ones are the privileged built-in accounts, e.g. Domain Admins, Enterprise Admins, etc. These privileged built-in accounts are protected by a mechanism, Security Descriptor propagation (SDProp for short), that enforces a secured ACL on the account objects in Active Directory. This enforcement is done in order to prevent unauthorized modifications to these privileged accounts.

Protected Accounts and Groups

Most of the privileged built-in accounts are considered protected accounts and groups. Each protected account's object permissions are set and enforced via an automatic process. This process, named SDProp, ensures the permissions on the object remain secured.

Figure 3 – Default ACEs in the AdminSDHolder object – Windows 2012 R2

SDProp runs by default every sixty minutes. In every run, the permissions on the protected accounts are reset to match those of the AdminSDHolder container, located under the system container in the domain partition. The process applies its task recursively on all members of groups and disables inheritance on all protected accounts.
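Conceptually, each SDProp pass does something like the following sketch (hypothetical helper and data shapes for illustration; the real process rewrites security descriptors in the directory itself):

```python
def sdprop_pass(protected_objects, admin_sd_holder_acl):
    """Stamp the AdminSDHolder template ACL onto every protected object
    and disable inheritance, undoing any manual permission drift."""
    for obj in protected_objects:
        obj["inheritance_enabled"] = False
        if obj["acl"] != admin_sd_holder_acl:
            obj["acl"] = list(admin_sd_holder_acl)

template = [("Domain Admins", "full_control")]
objs = [{"name": "CN=Administrator",
         "acl": [("helpdesk", "reset_password")],   # drift an attacker added
         "inheritance_enabled": True}]
sdprop_pass(objs, template)
print(objs[0]["acl"])   # [('Domain Admins', 'full_control')]
```

This is why permissions an attacker grants themselves on a protected account disappear within the hour — unless the attacker changed the template itself, which is the point of Scenario 2 below.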

SDProp significantly reduces the potential attack vector against the privileged built-in accounts. However, there could be scenarios exposing the DACLs on non-built-in privileged accounts, leading to potential escalation. Let's have a look at some of these scenarios.

Scenario 1: Delegated permissions

Active Directory allows an administrator to delegate permissions to regular domain accounts, e.g. user, group, computer, without adding the account to an administrative group. Commonly delegated permissions include Reset Password on user accounts, usually granted to helpdesk personnel, and the ability to add a new member to a group, often granted to the group's business owners. In addition, the owner of an object can modify permissions and delegate specific permissions to other users — for example, updating the title of a user account delegated to an HR service account.

When talking about the offensive use of ACLs, adversaries may try to escalate privileges by abusing delegated permissions to obtain access to accounts which have specific permissions on other AD objects.

Examples:

  • A low-privileged account that has "Modify Group Membership" permissions on a help-desk group may add any account to that group and effectively gain the same permissions as the privileged group.
  • A helpdesk group which has "Reset Password" permissions on an organizational unit (OU) in the domain effectively gains control over all user accounts in that OU, as the OU's permissions are inherited by its child objects (excluding objects protected by the SDProp process).

When the examples above are combined, they form a path. Attackers may use the path available after compromising the first account to gain permissions on multiple other objects. This ACL path will allow the attackers to escalate privileges to a privileged account which is not part of the accounts list that is protected by the SDProp mechanism.

However, an attacker won't be able to escalate privileges to privileged built-in accounts, as the ACLs on these accounts are protected by the SDProp mechanism.

Scenario 2: Changing the default ACL on the AdminSDHolder

Why would you change AdminSDHolder manually? To date, our team hasn't found a solid reason.

You should be careful with changing the AdminSDHolder. The Exchange team faced several issues with that (in a release candidate), which were all fixed for RTM, as the granted permissions on the AdminSDHolder enabled an elevation-of-privilege scenario that is unacceptable in any environment. More information can be found in the documentation, Appendix C: Protected Accounts and Groups in Active Directory.

If you have a good reason to change AdminSDHolder manually, please leave it in the comments below.

However, if you did change the DACL on the AdminSDHolder and added permissions for an additional user or group, you probably extended the domain's attack surface. That's because, with the new permission, there is an additional user or group that can manage the privileged built-in accounts. These accounts, which were granted permissions on the AdminSDHolder, can be compromised, and are often less protected and monitored than the privileged built-in accounts. Once compromised, they can be used as the initial user in the ACL path to a privileged built-in account, e.g. a domain admin.

Rohan Vazarkar, Will Schroeder, and Andy Robbins did a great job of bringing ACLs to the front, literally, with the announcement of BloodHound 1.3. Though the example in their blog post of planning an ACL-only attack path ending at a domain admin may prove difficult in practice, due to AdminSDHolder and SDProp, provided those permissions weren't modified.

The hybrid path

However, the ACL attack path could be combined with other lateral movement scenarios. For example, an ACL attack path could be used to compromise a helpdesk user, which, in turn, can be used to connect to a server where a Domain Admin user is logged in, then compromising the DA account.

Both attack surfaces in the hybrid path can be visualized in a graph and allow Blue Teams as well as Red Teams to map their Domain environments. BloodHound 1.3 is an open-source tool which uses a PowerShell script to collect the required data for creating the graph and graph theory to find potential attack paths. If you find a path with no obstacles, it probably leads somewhere.
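The path idea is plain graph search. A minimal Python sketch (invented node names; BloodHound's real model has typed edges and a graph database, but the principle is the same):

```python
from collections import deque

# edge (a -> b) means "control of a yields control of b"
edges = {
    "user1":          ["helpdesk_group"],   # Modify Group Membership
    "helpdesk_group": ["ou_users"],         # Reset Password on the OU
    "ou_users":       ["svc_backup"],       # account inside that OU
    "svc_backup":     ["server01"],         # local admin on the server
    "server01":       ["domain_admin"],     # a DA has a session there
}

def find_path(start, target):
    """Breadth-first search for the shortest escalation path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no obstacle-free path exists

print(find_path("user1", "domain_admin"))
```

The last two edges in this toy graph are the "hybrid" part: they model session-based lateral movement rather than pure ACL permissions.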

ATA can detect multiple reconnaissance methods, including the ones used by BloodHound, helping you uncover advanced attacks happening in your network.

Additional Resources

Advanced Threat Analytics is part of the Microsoft Enterprise Mobility + Security Suite (E3) or the Microsoft Enterprise CAL Suite (ECAL). Start a trial or deploy it now by downloading an Advanced Threat Analytics 90-day evaluation.

Ask questions and join the discussion with our team on the Microsoft Advanced Threat Analytics Tech Community site!


Securing Privileged Access for the AD Admin – Part 2

Latest Microsoft Server Management News - Mon, 09/18/2017 - 07:38

Hello everyone, my name is still David Loder, and I’m still PFE out of Detroit, Michigan. Hopefully you’ve read Securing Privileged Access for the AD Admin – Part 1. If not, go ahead. We’ll wait for you. Now that you’ve started implementing the roadmap, and you’re reading this with your normal user account (which no longer has Domain Admin rights), we’ll continue the journey to a more secure environment. Recall the overarching goal is to create an environment that minimizes tier-0 and in doing so establishes a clear tier-0 boundary. This requires understanding the tier-0 equivalencies that currently exist in the environment and either planning to keep them in tier-0 or move them out to a different tier.

Privileged Access Workstations (PAWs) for AD Admins

You’ve (hopefully) gone through the small effort to have a credential whose only purpose is to manage AD. Let’s assume you now need to go do some actual administering. The only implementation that prevents expansion of your tier-0 equivalencies would be to physically enter your data center and directly log on to the console of a Domain Controller. But that’s not very practical for any number of obvious reasons, and I think everyone would agree that an AD Admin being able to perform their admin tasks remotely, rather than from a DC console, is a huge productivity gain. Therefore, you now need a workstation.

I’m going to guess that most of you use the one workstation that was handed out by your IT department. That workstation which uses the same base image for every employee in the organization. That workstation which is designed to be managed by your IT department for ease of support. Yes, that workstation.

Recall last time we spent almost all our time talking about tier-0 equivalencies. Guess what? I’m going to sound like a broken record. Item #3 from our elevator speech in part one stated “Anywhere that tier-0 credentials are used is a tier-0 system.” What is the new system we just added to tier-0? That workstation. Now, any process that has administrative control over that workstation is a tier-0 equivalency. Consider patching, anti-virus, inventory and event log consolidation. Is each of those running as local system on your workstation and managed by a service external to the laptop? Check, check, check and check. Does it have Helpdesk personnel as local admins? Check. I’ll ask again: how big is your tier-0?

I hear some of you starting to argue ‘I don’t actually log on to my workstation with my AD admin credential, I use [X].’ What if you use RunAs? That workstation is still a tier-0 system. What if you use it to RDP into a jump-box? That workstation is still a tier-0 system. What if you have smartcard logons? Still a tier-0 system. Some of the supplemental material goes into the details of the various logon types, but the simple concept is ‘secure the keyboard.’ Whatever keyboard you’re using to perform tier-0 administration is a tier-0 system.

Now that we’ve established that your workstation really is a tier-0 system, let’s treat it as such. Start acting like your workstation is a portable Domain Controller. Think of all those plans, procedures and systems you have in place to manage the DCs. You need to start using them to manage your workstation. My fellow PFE Jerry Devore has an in-depth look at creating a PAW to be your admin workstation.

Should your PAW be a separate piece of hardware? Preferably, yes. That way it is only online when it needs to be used, helping to reduce the expansion of tier-0 to the minimum necessary. If your organization can’t afford separate hardware you can virtualize on one piece of hardware. But the virtualization needs to occur in the opposite direction than you might ordinarily expect. The PAW image will still need to run as the host image, and your corporate desktop would be virtualized inside. This keeps any compromise of your unprivileged desktop from elevating access into your PAW.

This is another big step/small step decision. PAWs will be a change for your organization. If you can start small by implementing it for a few AD Admins, you can show your enterprise that using PAWs can be a sustainable model. At later phases in the roadmap you can expand PAWs to more users.

With a PAW in place you now have a tier-0 workstation for your tier-0 credential to manage your tier-0 asset. Congratulations, by implementing the first two steps down the SPA roadmap, you now have the beginnings of a true tier-0 boundary.

Unique Local Admin Passwords for Workstations

So far, we’ve been talking about protecting your personal AD Admin accounts. But everyone knows AD has its own built-in Administrator account that is shared across all DCs. Ensure you have some process in place to manage that specific “break in case of fire” account. Maybe two Domain Admins each manage half of the password, and those halves are securely stored. The point is: have a procedure for managing this one account. Be careful if you decide to implement an external system to manage that password. Do you want that external system to become tier-0 just to manage one AD Admin account? I can’t answer that question for you, but I can point out that it is a tier-0 boundary decision. Your new PAWs, on the other hand, will have one built-in Administrator account per PAW. How do we practically secure those multiple Administrator accounts without increasing the size of tier-0?

The answer is to implement Microsoft’s Local Administrator Password Solution (LAPS). Simply put, LAPS is a new Group Policy Client Side Extension (CSE), available for you to deploy at no additional cost. It will automatically randomize the local Administrator account on your tier-0 PAWs on an ongoing basis, store that password in AD and allow you to securely manage its release to authorized personnel (which should only be the tier-0 admins). Since the PAW and AD are both already tier-0 systems, using one to manage the other does not increase the size of tier-0. That fits our goal of minimizing the size of tier-0.

These new PAWs that you just introduced into the environment also become the perfect place to begin a pilot deployment of LAPS. Install the CSE on the PAWs, create a separate OU to hold the PAW computer objects, create the LAPS GPO and link it to the PAW OU. You’ll never have to worry about the local admin password on your PAW again. As another big step/small step decision, using LAPS to manage the new PAWs should be an easier step than starting out using LAPS for all your workstations.
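The core idea of LAPS — generate a unique random password per machine and escrow it in a protected store — can be sketched as follows (illustrative only; the real CSE writes the password to the `ms-Mcs-AdmPwd` attribute on the computer object and relies on AD ACLs to gate who can read it):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def rotate_local_admin(computers, length=24):
    """Generate a unique random password per machine; this dict stands in
    for the protected AD attribute store LAPS uses for escrow."""
    return {name: "".join(secrets.choice(ALPHABET) for _ in range(length))
            for name in computers}

vault = rotate_local_admin(["PAW01", "PAW02"])
print(sorted(vault))   # ['PAW01', 'PAW02'] -- one independent secret each
```

Because every machine gets an independent secret, a hash dumped from one PAW is useless on every other machine — which is exactly the lateral-movement channel this step closes.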

If you’re interested in how LAPS allows us to help combat Pass the Hash attacks, here are a few additional
resources you can review.

Unique Local Admin Password for Servers

Building on your earlier decisions about where you want your tier-0 boundary to be, start running LAPS on those member servers that are going to remain part of tier-0. Again, a smaller step than LAPS everywhere, and not much else to say on the subject. By this point you should be familiar with LAPS and are just expanding its usage.

End of the Stage 1 and the Roads Ahead

If you expand LAPS to cover all workstations and all servers, congratulations, you have now followed the roadmap to the end of Stage 1.

Stage 2 and Stage 3 of the roadmap involve expanding the use of the PAWs to all administrators, implementing advanced credential management that begins to move you away from password-only credentials, minimizing the amount of standing, always-on admin access, implementing the tier-0 boundary you already decided upon, and increasing your ability to detect attacks against AD. You can also start looking at implementing Windows Server 2016 and taking advantage of some of our newest security features.

In these stages, we’re looking at implementing new capabilities that defend against more persistent attackers. As such, these will take longer to implement than Stage 1. But if you’ve already gotten people familiar with the tiering model and talking about your tier-0 boundary you’ll have an easier time implementing this guidance, with less resistance, as all the implementations are aligned to the singular goal of minimizing your tier-0 surface area.

2.1. PAW Phases 2 and 3: all admins and additional hardening

Get a PAW into the hands of everyone with admin rights to separate their Internet-using personal productivity end user account from their admin credentials. Even if they’re still crossing tiers at this point in time, there is now some separation from the most common compromise channel.

2.2. Time-bound privileges (no permanent administrators)

If an account has no admin rights, is it still an admin credential? The least vulnerable administrators are those with admin access to nothing. We provide tooling in current versions of both AD and Microsoft Identity Manager to deliver this functionality.

2.3. Multi-factor for time-bound elevation

Passwords are no longer a sufficient authentication mechanism for administrative access. Having to breach a secondary channel significantly increases the attackers’ costs.

Also have a look at some of our current password guidance.

2.4. Just Enough Admin (JEA) for DC Maintenance

Allowing junior or delegated Admins to perform approved tasks, instead of having to make them full admins, further reduces the tier-0 surface area. You can even consider delegating access to yourself for common actions you perform all the time, fully eliminating work tasks that require the use of a tier-0 credential.

2.5. Lower attack surface of Domain and DCs

This is where all the up-front work of understanding and defining your tier boundaries pays off in spades. When you reach this step, no one should be surprised about what you intend to do. If you’ve decided to keep tier-0 small and are isolating the security infrastructure management from the general Enterprise management, everyone has already agreed to that. If you’ve decided that you must keep some of those systems as tier-0, you’ve hardened them like they are DCs and have elevated the maturity of those admins to treat their credentials like the tier-0 assets they are.

2.6. Attack Detection

Seeing Advanced Threat Analytics (ATA) in action, providing visibility into exactly what your DCs are doing, will likely be an eye-opening revelation for most environments. Consider this your purpose-built Identity SIEM instead of simply being a dumping ground for events in general.

And, while not officially on the roadmap at this time, if you have SCOM, take a look at the great work some of our engineers have put into the Security Monitoring Management Pack.

3.1. Modernize Roles and Delegation Model

This goes together with lowering the attack surface of the Domain and DCs. You can’t accomplish that reduction without providing alternate roles and delegations that don’t require tier-0 credentials. You should be trying to scope tier-0 AD admin activity to actions like patching the OS and promoting new DCs. If someone isn’t performing a task along those lines, they likely are not tier-0 admins and should instead be delegated rights to perform the activity and not be Domain Admin.

3.2. Smartcard or Passport Authentication for all admins

More of the same advice that you need to start eliminating passwords from your admins.

3.3. Admin Forest for Active Directory administrators

I’m sure your AD environment is perfectly managed. All the legacy protocols have been disabled, and you have control over every account (human or service) that has admin rights on any DC. In essence, you’ve already been doing everything in the roadmap.

No?

Your environment doesn’t look like that?

Sometimes it’s easier to admit that it’s going to be too difficult to regain administrative control over the current Enterprise forest. Instead, you can implement a new, pristine environment right out of the box and shift your administrative control to this forest. Your current Enterprise forest is left mostly alone due to all the app-compat concerns that go along with everything that’s been tied to AD. We have lots of guidance and implementation services to help make sure you build this new forest right and ensure it’s only used for administration purposes. That way you can turn on all the new security features to protect your admins without fear of breaking the old app running in some forgotten closet.

3.4. Code Integrity Policy for DCs (Server 2016)

Your DCs should be your most controlled, purpose-built servers in your environment. Creating a policy that locks them down to exactly what you intended helps keep tier-0 from expanding as your DCs can’t just start running new code that isn’t already part of their manifest.

3.5. Shielded VMs for virtual DCs (Server 2016 Hyper-V Fabric)

I remember the first time I saw a VM POST and realized what a game-changer virtualization was going to be. Unfortunately, it also made walking out the door with a fully running DC as easy as copy/paste. With Shielded VMs you can now enforce boundaries between your Virtualization Admins and your AD Admins. You can allow your virtualization services to operate at tier-1 while being able to securely host tier-0 assets without violating the integrity of the tier boundary. Can you say “Game changer”?

Don’t Neglect the Other Tiers

While this series focused on tier-0, the methodology of tackling the problem extends to the other tiers as well. This exercise was fundamentally about segmentation of administrative control. What we’ve seen is that, over the years, unintentional administrative control gets granted and then becomes an avenue for attack. Be especially on the lookout for service accounts that are local admin on lots of systems, and understand how those credentials are used and whether they are present on those endpoints in a manner that allows them to be reused for lateral movement. If you’ve gone through the effort to secure tier-0 but you have vulnerable credentials with standing admin access to all of tier-1, where your business-critical data is stored, you probably haven’t moved the needle as much as you need to. Ideally you get to the point where the compromise of a single workstation or a single server is contained to that system and doesn’t escalate into a compromise of most of the environment.

I know this has been a lot of guidance over these two posts. Even if you can’t do everything, I know you can do something to improve your environment. Hopefully I provided some new insight into how you can make your environment more secure than it is currently and exposed you to the volumes of guidance in the SPA roadmap. Now get out there and start figuring out where your tier-0 boundary is and where you want it to be!

Thanks for spending a little bit of your time with me.

-Dave

#proudmicrosoftemployee


Oracle’s New SPARC Systems Deliver 2-7x Better Performance, Security Capabilities, and Efficiency than Intel-based Systems

Latest Oracle Database News - Mon, 09/18/2017 - 05:00
Press Release Oracle’s New SPARC Systems Deliver 2-7x Better Performance, Security Capabilities, and Efficiency than Intel-based Systems World’s most advanced processor adds breakthrough performance and security enhancements with Software in Silicon v2 for Oracle Cloud, Oracle Engineered Systems, and Servers

Redwood Shores, Calif.—Sep 18, 2017

Oracle today announced its eighth generation SPARC platform, delivering new levels of security capabilities, performance, and availability for critical customer workloads. Powered by the new SPARC M8 microprocessor, new Oracle systems and IaaS deliver a modern enterprise platform, including proven Software in Silicon with new v2 advancements, enabling customers to cost-effectively deploy their most critical business applications and scale-out application environments with extreme performance both on-premises and in Oracle Cloud.

SPARC M8 processor-based systems, including the Oracle SuperCluster M8 engineered systems and SPARC T8 and M8 servers, are designed to seamlessly integrate with existing infrastructures and include fully integrated virtualization and management for private cloud. All existing commercial and custom applications will run on SPARC M8 systems unchanged with new levels of performance, security capabilities, and availability. The SPARC M8 processor with Software in Silicon v2 extends the industry’s first Silicon Secured Memory, which provides always-on hardware-based memory protection for advanced intrusion protection and end-to-end encryption, and Data Analytics Accelerators (DAX) with open APIs for breakthrough performance and efficiency running database analytics and Java streams processing. The Oracle Cloud SPARC Dedicated Compute service will also be updated with the SPARC M8 processor.

“Oracle has long been a pioneer in engineering software and hardware together to secure high-performance infrastructure for any workload of any size,” said Edward Screven, chief corporate architect, Oracle. “SPARC was already the fastest, most secure processor in the world for running Oracle Database and Java. SPARC M8 extends that lead even further.”

The SPARC M8 processor offers security enhancements delivering 2x faster encryption and 2x faster hashing than x86 and 2x faster than SPARC M7 microprocessors. The SPARC M8 processor’s unique design also provides always-on security by default and built-in protection of in-memory data structures from hacks and programming errors. SPARC M8’s silicon innovation provides new levels of performance and efficiency across all workloads, including:  

  • Database: Engineered to run Oracle Database faster than any other microprocessor, SPARC M8 delivers 2x faster OLTP performance per core than x86 and 1.4x faster than M7 microprocessors, as well as up to 7x faster database analytics than x86.
  • Java: SPARC M8 delivers 2x better Java performance than x86 and 1.3x better than M7 microprocessors.  DAX v2 produces 8x more efficient Java streams processing, improving overall application performance.
  • In-Memory Analytics: The innovative new processor delivers 7x more queries per minute (QPM) per core than x86 for database analytics.
 

Oracle is committed to delivering the latest in SPARC and Solaris technologies and servers to its global customers. Oracle’s long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034.

Contact Info

Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com

Kristin Reeves
Blanc and Otus
+1.925.787.6744
kristin.reeves@blancandotus.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Kristin Reeves

  • +1.925.787.6744

Follow Oracle Corporate

Categories: Latest Oracle News


Recently Published KB articles and Support Content 9-15-2017

Latest Microsoft Server Management News - Fri, 09/15/2017 - 15:30

We have recently published or updated the following support content for Configuration Manager.

How-To or Troubleshooting

10082 Troubleshooting PXE boot issues in Configuration Manager

  • Online Troubleshooting Guide that helps administrators diagnose and resolve PXE boot failures in System Center 2012 Configuration Manager (ConfigMgr 2012 or ConfigMgr 2012 R2) and later versions. Read More https://support.microsoft.com/help/10082.

4040243 How to enable TLS 1.2 for Configuration Manager

  • This article describes how to enable TLS 1.2 for Microsoft System Center Configuration Manager. This description includes individual components, update requirements for commonly used Configuration Manager features, and high-level troubleshooting information for common problems. Read More https://support.microsoft.com/help/4040243/.
Issue Resolution

4037828 Summary of changes in System Center Configuration Manager current branch, version 1706

  • Release version 1706 of System Center Configuration Manager Current Branch contains many changes to help you avoid issues and many feature improvements. The “Issues that are fixed” list is not inclusive of all changes. Instead, it highlights the changes that the product development team believes are the most relevant to the broad customer base for Configuration Manager. Read More https://support.microsoft.com/help/4037828.

4036267 Update 2 for System Center Configuration Manager version 1706, first wave

  • An update is available to administrators who opted in through a PowerShell script to the first wave (early update ring) deployment for System Center Configuration Manager current branch, version 1706. You can access the update in the Updates and Servicing node of the Configuration Manager console. This update addresses important late-breaking issues that were resolved after version 1706 became available globally. Read more https://support.microsoft.com/help/4036267.

4039380 Update for System Center Configuration Manager version 1706, first wave

  • This update addresses important issues in the first wave (early update ring) deployment for Microsoft System Center Configuration Manager current branch, version 1706. This update is no longer available and has been replaced by update KB 4036267. Read more https://support.microsoft.com/help/4039380.

4041012 1702 clients do not get software updates from Configuration Manager

  • After installing Configuration Manager version 1702, newly installed clients are unable to get updates from the Software Update Point. This can also occur if the Software Update Point is moved to a different server after installation of version 1702.  Read More https://support.microsoft.com/help/4041012.

4019125 FIX: System Center Configuration Manager replication process by using BCP APIs fails when there is a large value in an XML column

  • Read More https://support.microsoft.com/help/4019125.

4038659 Existing computer records are not updated when new information is imported in System Center Configuration Manager version 1702

  • When new information for an existing computer is imported, either through the Configuration Manager console or the ImportMachineEntry method, a new record is created for that computer. This causes changes to the existing collection membership, discovery properties, and task sequence variables for that computer. Read More https://support.microsoft.com/help/4038659.
Categories: Latest Microsoft News

Because it’s Friday: Rapid Unscheduled Disassembly

Latest Microsoft Data Platform News - Fri, 09/15/2017 - 12:30

SpaceX has done some amazing work proving the concept of commercial spaceflight services. But that's not to say there haven't been a few bumps along the way, as this "blooper reel" (set to Monty Python music) shows.

(If now's not a good time for video, Ars Technica has selected some choice stills for you.)

That's all from us at the blog for this week. We'll be back on Monday — see you then!

Categories: Latest Microsoft News

Microsoft R Open 3.4.1 now available

Latest Microsoft Data Platform News - Fri, 09/15/2017 - 03:29

Microsoft R Open (MRO), Microsoft's enhanced distribution of open source R, has been upgraded to version 3.4.1 and is now available for download for Windows, Mac, and Linux. This update upgrades the R language engine to R 3.4.1 and updates the bundled packages.

MRO is 100% compatible with all R packages. MRO 3.4.1 points to a fixed CRAN snapshot from September 1, 2017, and you can see some highlights of new packages released since MRO 3.4.0 on the Spotlights page. As always, you can use the built-in checkpoint package to access packages from an earlier date (for compatibility) or a later date (to access new and updated packages).

MRO 3.4.1 is based on R 3.4.1, a minor update to the R engine (you can see the detailed list of updates to R here). If you've had problems installing packages on Windows, this update fixes a bug that affected some users. It's also backwards-compatible with R 3.4.0 (and MRO 3.4.0), so you shouldn't encounter any new issues by upgrading. Note that R 3.4.2 is around the corner (MRO 3.4.2 will be released in October).

We hope you find Microsoft R Open useful, and if you have any comments or questions please visit the Microsoft R Open forum. You can follow the development of Microsoft R Open at the MRO Github repository. To download Microsoft R Open, simply follow the link below.

MRAN: Download Microsoft R Open

Categories: Latest Microsoft News

Simplifying transition from Hybrid MDM (ConfigMgr+Intune) to Intune standalone

Latest Microsoft Server Management News - Thu, 09/14/2017 - 12:23

We have heard repeatedly from our customers who are using System Center Configuration Manager connected with Microsoft Intune (hybrid MDM) that they'd like to move to a cloud-only experience with Intune on Azure. This experience brings many new benefits, such as large scale, a unified admin console, RBAC, and more. To help customers easily transition, we're introducing a new process for moving from hybrid MDM to Intune standalone.

Previously, the move from hybrid MDM to Intune standalone required a one-time authority switch that would move an entire tenant at once and force the admin to reconfigure all settings in Intune, including re-enrolling all devices. Our new approach will allow customers to move from hybrid MDM to Intune standalone in a more controlled manner without impacting end users. The new process consists of three parts: Microsoft Intune Data Importer, mixed authority, and an improved MDM authority switch.

Microsoft Intune Data Importer

One of the biggest hurdles with the process of moving from hybrid MDM to Intune standalone has been the need to recreate all the profiles, policies and apps targeted to users and devices. Microsoft Intune Data Importer is a new downloadable tool designed to automatically copy MDM data created in Configuration Manager to an Intune environment. Importable objects include configuration items, certificate profiles, email profiles, VPN profiles, Wi-Fi profiles, compliance policies, terms and conditions, and apps.

Because Active Directory (AD) groups can be synced to Azure AD groups, deployments for the imported objects can be imported if the user collections in Configuration Manager are based on AD groups. Deployments will appear as assignments in the Intune console.

Microsoft Intune Data Importer is available for download through GitHub. You can leave feedback for the tool there as well. We are continuing to add support for new settings, and your feedback will help us make improvements for future releases.

Mixed authority

The second big change in the migration process is what we refer to as mixed authority. Mixed authority allows admins to selectively migrate users from hybrid MDM to Intune standalone in a phased, controlled manner. This means that you can migrate some groups of users to Intune standalone while you continue to use hybrid MDM for the remaining users and devices. Once a user has been moved to Intune standalone, that user and all of their devices will be managed from the Intune on Azure console. You can then create and deploy policies, initiate remote actions, and enroll new devices as if this user were part of an Intune standalone environment. Tenant level policies, such as your iOS APNs certificate, will function for users and devices managed by both hybrid MDM and Intune standalone and will only be editable via the Configuration Manager console while in mixed authority mode.

With mixed authority, all tenants will use the Intune on Azure console and will not need to use the legacy Silverlight-based console for MDM management of migrated users. This new capability will be rolled out starting today. You will be notified through the Office 365 Message Center when your tenant is enabled for mixed authority. Note: the Silverlight-based console will still be required for tenants using the Intune PC client. Tenants using the Intune PC client will take longer to be enabled.

Putting it together

The Microsoft Intune Data Importer tool and mixed authority are two pieces of the new migration strategy. We recommend running the Microsoft Intune Data Importer first and making sure all policies and configurations are in place before migrating any users. If the policies targeted to a user are the same in both consoles, there will be no impact to the end user when they migrate.

Once you are satisfied with your testing in the Intune standalone environment, you will initiate the tenant MDM authority switch through the Configuration Manager console. Because of recent changes to the MDM switch, this will migrate any remaining hybrid MDM users. All of the policies and apps that were created in the Intune on Azure portal, as well as your tenant level policies, will be migrated and available for configuration in the Intune on Azure console. Enrolled devices will not be required to re-enroll.

We are really excited about the release of these new tools and can't wait for customers moving to Intune standalone to have a smoother, more predictable experience. You can learn how to use this new functionality in our detailed documentation.

 

Categories: Latest Microsoft News

HOTFIX: Clients cannot download peer cache content in Configuration Manager version 1706

Latest Microsoft Server Management News - Thu, 09/14/2017 - 12:05

After you upgrade to Configuration Manager current branch version 1706, clients may not be able to download content from peer cache sources.  We have released a hotfix that resolves this issue.  This is a targeted hotfix and will be available in the Updates and Servicing node of the Configuration Manager console for sites that need it.

For the latest information about the issue and how to install the hotfix, please see the following:

4042345 Clients cannot download peer cache content in Configuration Manager version 1706 (https://support.microsoft.com/help/4042345)

Categories: Latest Microsoft News

Q1 FY18 GAAP EPS UP 19% TO $0.52 and NON-GAAP EPS UP 12% TO $0.62

Latest Oracle Database News - Thu, 09/14/2017 - 12:04
Press Release

Q1 FY18 GAAP EPS UP 19% TO $0.52 and NON-GAAP EPS UP 12% TO $0.62

Q1 FY18 Cloud Revenues Up 51% to $1.5 Billion and Total Revenues Up 7% to $9.2 Billion

Redwood Shores, Calif.—Sep 14, 2017

Oracle Corporation (NYSE: ORCL) today announced fiscal 2018 Q1 results. Total Revenues were up 7% from the prior year to $9.2 billion. Cloud plus On-Premise Software Revenues were up 9% to $7.4 billion. Cloud Software as a Service (SaaS) revenues were up 62% to $1.1 billion. Cloud Platform as a Service (PaaS) plus Infrastructure as a Service (IaaS) revenues were up 28% to $400 million. Total Cloud Revenues were up 51% to $1.5 billion.

GAAP Operating Income was up 7% to $2.8 billion and Operating Margin was 31%. Non-GAAP Operating Income was up 11% to $3.8 billion and non-GAAP Operating Margin was 41%. GAAP Net Income was up 21% to $2.2 billion, while non-GAAP Net Income was up 14% to $2.7 billion. GAAP Earnings Per Share was up 19% to $0.52, while non-GAAP Earnings Per Share was up 12% to $0.62.

Short-term deferred revenues were up 9% compared with a year ago to $10.3 billion. Operating cash flow on a trailing twelve-month basis was up 8% to $14.8 billion.

“The sustained hyper-growth in our multi-billion dollar cloud business continues to drive Oracle’s overall revenue and earnings higher and higher,” said Oracle CEO, Safra Catz. “In Q1, total revenues were up 7%, GAAP EPS was up 19%, and non-GAAP EPS was up 12%. Oracle is off to a very, very strong start in FY18.”

“With SaaS revenue up 62%, our cloud applications business continues to grow more than twice as fast as Salesforce.com,” said Oracle CEO, Mark Hurd. “ERP is our largest and most important cloud applications business. We now have about 5,000 Fusion ERP customers plus 12,000 NetSuite ERP customers in the Oracle Cloud. That’s 30 times more ERP customers than Workday.”

“In a couple of weeks, we will announce the world’s first fully autonomous database cloud service,” said Oracle Chairman and CTO, Larry Ellison. “Based on machine learning, the latest version of Oracle is a totally automated “self-driving” system that does not require human beings to manage or tune the database. Using AI to eliminate most sources of human error enables Oracle to offer database SLA’s that guarantee 99.995% reliability while charging much less than AWS.”

The Board of Directors also declared a quarterly cash dividend of $0.19 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on October 11, 2017, with a payment date of October 25, 2017.

Q1 Fiscal 2018 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q1 results and Fiscal 2018 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 82330974.

Contact Info

Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com

Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com

About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

"Safe Harbor" Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our cloud applications business compared to competitors and the announcement regarding our new autonomous database cloud service, are all "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware systems products or purchase or renew support contracts. (3) If the security measures for our products and services are compromised or if our products and services contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and reduced sales. (4) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. 
(6) Economic, geopolitical and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of September 14, 2017. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Follow Oracle Corporate

Categories: Latest Oracle News


Managed Service Identities and Azure AD: Helping Azure developers keep their secrets secret!

Latest Microsoft Server Management News - Thu, 09/14/2017 - 11:05

Howdy folks,

Just a quick note today! I am excited to announce a preview of a new integration between Azure and Azure Active Directory that is designed to make life easier for developers. It’s called Managed Service Identity, and it makes it simpler to build apps that call Azure services.

Typically, to call a cloud service you need to send a credential (e.g., an API key) to authenticate your app. Managing these credentials can be tricky. They are, by definition, secrets! You don’t want them to show up on dev/ops workstations or get checked into source control. But they must be available to your code when your code is running.

So how do you get them there without anyone seeing them? Managed Service Identities!

Managed Service Identity solves this problem by giving a computing resource, like an Azure VM, an automatically-managed, first-class identity in Azure AD. You can use this identity to call Azure services without any credentials appearing in your code. If the service you are calling doesn’t support Azure AD authentication, you can still use Managed Service Identity to authenticate to Azure Key Vault and fetch the credentials you need at runtime. Presto, no credentials in code!
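As a rough sketch of what this looks like from inside a VM, the code below (Python, using only the standard library) requests a token from the Azure Instance Metadata Service endpoint for managed identities. Note the endpoint and api-version shown here are the ones documented for the generally-available service; during the preview described above they may differ, so treat this as an illustration rather than the exact preview API.

```python
import json
import urllib.parse
import urllib.request

# Azure Instance Metadata Service (IMDS) endpoint for managed identities.
# Only reachable from inside an Azure VM; no credentials are needed.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_token_request(resource, api_version="2018-02-01"):
    """Build the URL and headers for requesting a token for `resource`."""
    query = urllib.parse.urlencode({"api-version": api_version, "resource": resource})
    url = f"{IMDS_TOKEN_URL}?{query}"
    # The Metadata header is required; it blocks forwarded (SSRF-style) requests.
    headers = {"Metadata": "true"}
    return url, headers

def get_access_token(resource):
    """Fetch an access token; this call only succeeds on a VM with MSI enabled."""
    url, headers = build_token_request(resource)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

if __name__ == "__main__":
    # For example: get a token for Azure Key Vault, then use it to fetch
    # whatever application secrets you need at runtime.
    print(get_access_token("https://vault.azure.net"))
```

The bearer token returned can then be sent in the Authorization header of calls to the target service, which is how the "fetch your credentials from Key Vault at runtime" pattern described above works in practice.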

You can read more about the Managed Service Identity preview on the Azure blog.

Happy coding!

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Categories: Latest Microsoft News

Working with data frames in SQL Server R Services

Latest Microsoft Data Platform News - Thu, 09/14/2017 - 09:56

Most R users are quite familiar with data frames: the data.frame is the fundamental object type for working with columnar data in R. But for SQL Server users, the data frame is an important concept to understand, since it will be the main object type in R used to store data from SQL tables. This guide to working with data frames with SQL Server R Services provides the basic concepts of R data frames and how to generate them and manipulate them within a SQL Server procedure.

If you find that article useful, you might also find these prior articles from the series useful as well:

Redgate Hub: SQL Server R Services: Working with Data Frames

Categories: Latest Microsoft News

Sneak peek #4: Introducing Project “Honolulu”, our new Windows Server management experience

Latest Microsoft Server Management News - Thu, 09/14/2017 - 09:00

This blog post was authored by Samuel Li, Principal Program Manager Lead, Windows Server.

Today, we are thrilled to unveil the next step in our journey for Windows Server graphical management experiences. In less than two weeks at Microsoft Ignite, we will launch the Technical Preview release of Project “Honolulu”, a flexible, locally-deployed, browser-based management platform and tools.

Project “Honolulu” is the culmination of significant customer feedback, which has directly shaped product direction and investments. With support for both hybrid and traditional disconnected server environments, Project “Honolulu” provides a quick and easy solution for common IT admin tasks with a lightweight deployment.

This blog post continues our recent “sneak peek” series, and we highly recommend “Honolulu” as a graphical management solution for Windows Server, version 1709, and several other versions of Windows Server too!

Yes, we (still) love GUI tools!

If you’re a longtime Windows Server IT admin, you probably “grew up with” and still regularly use MMC and other in-box GUI tools for some management and administrative tasks. You probably also have some level of PowerShell expertise since scripting and automation have become increasingly important as our industry evolves and embraces cloud-scale concepts and deployments.

For scripting and automation, Windows Server has done a great job of providing PowerShell coverage, while for graphical management, cloud-hosted solutions like Operations Management Suite (OMS) are providing added value for larger scale and hybrid environments. Still, IT admins have repeatedly told us that PowerShell is necessary but not sufficient, and that Windows Server ease-of-use is still largely dependent on GUI tools for core scenarios and new capabilities.

Indeed, as you’ll see with Project “Honolulu”, we will continue to invest in GUI tools. Whether it’s for scenarios where GUI has an inherent advantage like data visualization or comparison, or for ad hoc configuration or troubleshooting, we will evolve and expand core GUI tools that are complementary to investments in PowerShell and larger scale management solutions like OMS.

Modernized, simplified, integrated, and secure experiences

Project “Honolulu” is the next step in our journey to deliver on our vision for Windows Server graphical management experiences.

Our vision starts with modernizing both the platform and the tools. For us, modernizing the platform means giving users greater flexibility in how and where they deploy and access the tools. Modernizing the platform also enables partners, both internal and external, to leverage and easily build on top of a growing ecosystem of tools and capabilities. For platform adoption and growth, it means supporting a reasonable set of existing Windows Server versions, not just the latest, and being licensed as part of Windows Server at no extra cost. Modernizing the graphical management platform reduces the friction of creating modernized admin tools.

Our vision continues with simplifying the experience where appropriate. Deployment is quick and easy, with no Internet dependency. Tools are familiar, and cover the core set of administrative tasks for troubleshooting, configuration, and maintenance. Some Windows Server capabilities, which were previously manageable only via PowerShell, now also have an easy-to-use graphical experience.

Our vision also includes integrating the management experiences in compelling ways. Each tool is available not just together in one place, but can be filtered to show contextual data inside of another tool. One tool can link to another with context, and these links are just URLs which can be launched from external sources. The architecture also allows for cloud integration in the future.

Finally, our vision is to deliver a secure platform that helps solutions be secure by default and optimizes support for security solutions. A future blog post in this series will describe in more detail what we’re doing with security and assurance.

Project “Honolulu” Technical Preview – coming (very) soon!

By late next week we’ll publish the Project “Honolulu” Technical Preview package for everyone to install and use. We’ll update this blog post with the download link once it’s available. Stay tuned!

If you are registered to attend Ignite, we invite you to come visit our station in the Hybrid Platform area of the expo, and join us at our breakout session where we’ll provide a lot more details about Project “Honolulu”, along with plenty of demos:

The screenshot below is a sneak peek at the scope of core tools in the Server Manager solution in Project “Honolulu”:

Hyper-converged infrastructure management (early preview)

One of the most exciting new tools we’re previewing as part of Project “Honolulu” Technical Preview is a brand-new solution for managing hyper-converged clusters (Hyper-V with Storage Spaces Direct). In one modern, simple, and integrated experience, you can provision and manage VMs and volumes, and see drives, servers, and their health status across the cluster. See historical and real-time performance charts for cluster-wide CPU/memory/network usage and storage IOPS, throughput, and latency, then drill in to see metrics for individual VMs, volumes, and drives.

Again, if you’ll be at Ignite, we also invite you to join us at our second breakout session where we’ll dive into the HCI UX in Project “Honolulu” and show lots of demos:

The screenshot below is a sneak peek at the scope of the Hyper-Converged Cluster Manager solution in Project “Honolulu”:

Customer quotes

Project “Honolulu” has been in Private Preview for the past 5 months with several version updates and around 150 users. With their permission, we are happy to share some of their comments with you.

“The first reaction from many people in our organization was: ‘That’s the tool we’ve been waiting for!’”

– Rick Kutschera, Engineering Manager, bwin.party Digital Entertainment

“Very easy and straight forward deployment. Light weight installer package makes the Honolulu install very quick.”

– Muruganantham Raju, Systems Engineer, Intel

“Managing GUI-less server has never been easier.”

– Thomas Maurer, Cloud Architect, itnetX

“Honolulu has REALLY helped us gain centralized visibility and simplified management of our Windows servers (both on-prem and in Azure) in ways we were never able to do before! We have LOVED the frequent release of updates for Honolulu that builds upon the feature sets that help us better manage and administer our environment. Honolulu has gone from being ‘good’ to being ‘invaluable’ in our management and administration strategy for our servers!”

– Rand Morimoto, President, Convergent Computing (CCO)

“I use Honolulu to manage hyperconverged cluster based on Storage Spaces Direct and what a tool. Honolulu provides me metrics, alerts and ease of management from a web interface. I can now easily manage my volume and get the state of my infrastructure. Honolulu looks like really flexible and Microsoft can add features really quickly.”

– Romain Serre, Technical Architect, AVA6

“The advanced dashboard offers performance view, events, alerts and all the information that all IT Pro need.”

– Silvio Di Benedetto, Founder and Senior Consultant, Inside Technologies

“I really loved SMT in Azure but there was always the concern that it’s running online. The cloud connection was a showstopper for many of my customers. But with this new solution we really address the needs of IT.”

– Eric Berg, Principal IT Architect, COMPAREX AG

“Try Honolulu, even if you have established management tools. It is easy and fast to setup, needs no client, does no harm and provides fast information and tools. You will see, that you will love it and start doing things with it. Start small and fast and you will accelerate.”

– Jeron Mehl, Department Manager IT Services, FZI Forschungszentrum Informatik

Project “Honolulu” is designed for you, and with you! Customer feedback has been invaluable, and as we launch Project “Honolulu” Technical Preview publicly, ongoing feedback from all of our valued users and partners will continue to be crucial in helping us prioritize and sequence future investments.

Don’t forget to sign up for the Windows Insiders program to get early access to preview builds and the Microsoft Tech Community so you can join the conversation!

See you at Ignite!

Check out other blogs in this series:

Categories: Latest Microsoft News

How to configure ASR in Windows Server Essentials 2016

Latest Microsoft Server Management News - Wed, 09/13/2017 - 11:54

Hello Windows Server Essentials friends,

Windows Server Essentials (or the Essentials Experience role found in Windows Server Standard or Datacenter) can be leveraged to quickly provision and enable a full Disaster Recovery site in the cloud using built-in Microsoft Azure integration features. The solution is composed of two Azure products:

  1. Azure Virtual Network:  https://azure.microsoft.com/en-us/services/virtual-network/
  2. Azure Site Recovery: https://azure.microsoft.com/en-us/services/site-recovery/

Using the Windows Server Essentials wizards and setting up the hardware in a working configuration can be challenging, and there are a few steps and prerequisites to consider before deploying the server.

Two of our bright stars in the Windows Server Essentials community are Daniel Santos and Alex Fields. Alex is from Minneapolis, and he wrote a new series of blog posts (leveraging Daniel’s great analysis work) to help folks navigate the configuration of the hardware and the Windows Server Essentials operating system to enable this disaster recovery solution.

Check out Alex’s blog series on itpromentor.com for a detailed rundown of the configuration steps:

Part 1: Pre-requisites      https://www.itpromentor.com/asr-wse-part-1/

Part 2: Configuring          https://www.itpromentor.com/asr-wse-part-2/

Part 3: Failover and failback    https://www.itpromentor.com/asr-wse-part-3/

Here is what the topology looks like:

Cheers and good luck!

Scott M. Johnson
Senior Program Manager
Windows Server Essentials
@SuperSquatchy


Categories: Latest Microsoft News

Sneak peek #3: Windows Server, version 1709 for developers

Latest Microsoft Server Management News - Wed, 09/13/2017 - 09:00

This blog post was authored by Patrick Lang, Senior Program Manager, Windows Server.

Windows 10 and Windows Server are built on customer feedback. We use the tools built right into Windows to maintain a continuous cycle of feedback from our customers used to drive refinements and new features. After the launch of Windows Server 2016, we continued to listen to our customer feedback channels and the repeated message we heard is that you want access to new Windows Server builds more frequently to test new features and fixes. First, we announced that Windows Server would be joining the Windows Insider program so you can download and test the latest builds. However, previews alone aren’t enough so we launched the Windows Server Semi-annual Channel to ship supported releases twice per year, and this fall will be the first release on that cadence.

Since the launch of Windows Server 2016, container adoption has skyrocketed, with many customers using a “lift and shift” approach to migrate existing applications and start the journey to modernize their deployments. Hyper-V can also provide unprecedented isolation between containers, and you can leverage your existing Active Directory infrastructure for apps in containers with Group Managed Service Account support.

We heard loud and clear that developers need a platform that provides great density and performance as well as flexibility to run containerized applications. Here’s a glimpse on what’s coming in Windows Server, version 1709 for developers:

Faster downloads, builds, and deployments with Nano Server container image

In the Windows Server Semi-Annual Channel, we’ve optimized Nano Server as a container base OS image and decreased it from 390 MB to 80 MB. That’s a nearly 80% savings! This gives developers a much smaller image ideal for building new applications or adding new services to existing applications.

We launched Windows Server containers with a getting started guide and open-source documentation on GitHub. The community response has been excellent, and we’ve had over 150 people share their expertise and contribute back. Check out our documentation page to learn more. For those of you who joined the Windows Insiders program, you can also check out the documentation on how to use containers with Insider images.

Linux containers

We knew developers were eager to run any container, Windows or Linux, on the same machine. The crowd went wild when we announced this at Dockercon earlier this year and it showed how much demand there was for this work. This feature uses Hyper-V isolation to run a Linux kernel with just enough OS to support containers. Since then, we’ve been hard at work building this technology with new functionality in Hyper-V, joint work with the Linux community, and contributions to the open source Moby project on which Docker technology is built. Now it’s time to share a sneak peek of how to run Linux containers and start getting feedback on how it’s working for Windows Insiders.

You can get started with these features right away as a Windows Insider. To try this out, you’ll need:

  • Windows 10 or Windows Server Insider Preview build 16267 or later
  • A build of the Docker daemon based off the Moby master branch
  • Your choice of compatible Linux image

Our joint partners have published guides with steps on how to get started:

More to come!

Of course, this is just a glimpse of the news for developers in this release. We have a bunch more we’ll talk about in the blogs to come. Keep an eye out for other blogs in this series and join the Windows Insiders program to have access to the preview releases. Feedback is always welcome! Please use the Windows Feedback Hub if you’re a Windows 10 Insider, or join us at the Windows Server Insiders Tech Community.

Check out other blogs in this series:

Categories: Latest Microsoft News

Oracle Joins the Cloud Native Computing Foundation

Latest Oracle Database News - Wed, 09/13/2017 - 09:00
Press Release Oracle Joins the Cloud Native Computing Foundation Continues Open Source Commitment with Release of Kubernetes on Oracle Linux and Terraform Kubernetes Installer for Oracle Cloud Infrastructure

OpenSource Summit North America, Los Angeles—Sep 13, 2017

Oracle announced today that it has joined the Cloud Native Computing Foundation (CNCF) as a Platinum Member. In addition, Oracle is releasing Kubernetes on Oracle Linux and open sourcing a Terraform Kubernetes Installer for the next-generation Oracle Cloud Infrastructure. As such, developers gain unparalleled simplicity for running their cloud native workloads on Oracle.

As enterprises accelerate how they build and deploy mission-critical applications, development teams are seeking an open, cloud-neutral, container-native technology stack that avoids lock-in. By joining CNCF, Oracle is demonstrating its support for this effort as well as the Kubernetes open-source community, the core component of the CNCF technology stack.

Kubernetes is the industry-leading open source container orchestration and management platform, rapidly emerging as the standard for containerized applications. Deploying Kubernetes can be complex, especially with regard to storage, security, and networking. By open sourcing its fully-supported, automated Kubernetes Terraform template for the Oracle Cloud Infrastructure, Oracle is helping developers avoid these challenges and easily install, run and manage Kubernetes-based container apps with the extreme performance of bare metal.

Additionally, Oracle is releasing Oracle Linux Container Services for use with Kubernetes, simplifying the configuration and setup of Kubernetes for any environment on Oracle Linux: public cloud, private cloud, and customer on-premises. This solution has been developed as an extension to Oracle Linux and Oracle Linux Container Services for Docker, with a streamlined installation model so developers can easily set up and deploy their orchestration environment in a matter of minutes.

“Nobody has more experience managing complex enterprise workloads than Oracle. By joining the CNCF, we’re making it easier for enterprises to leverage the power of container-native technology to simplify their infrastructure environments to run in true hybrid cloud mode – in any cloud,” said Mark Cavage, vice president of software development at Oracle. “CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers’ development toolchains. Together with the CNCF, Oracle is cultivating an open container ecosystem built for cloud interoperability, enterprise workloads and performance.”

“Oracle has decades of experience meeting the needs of world-class enterprises,” said Chris Aniszczyk, chief operating officer of the Cloud Native Computing Foundation. “We are excited to have Oracle join CNCF as a platinum member, and believe that their key role will help define the future of enterprise cloud.”

Today’s announcement follows a string of recent news underscoring Oracle’s commitment to open source and the container ecosystem, including dedicating engineering resources to the Kubernetes project, open-sourcing several container utilities and making its flagship databases and developer tools available in the Docker Store marketplace.

Contact Info

Alex Shapiro
Oracle
+1.415.608.5044
alex.shapiro@oracle.com

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Alex Shapiro

  • +1.415.608.5044

Follow Oracle Corporate

Categories: Latest Oracle News
