As we start 2018, here are some thoughts on ongoing enterprise cloud technology trends and how they will impact the choices solution architects need to make when designing systems.
Cloud is now firmly established as the default location for enterprise workloads, apart from specific high-security areas. However, the term “Cloud” covers many things, leading to growing deployment choices. One area where the choice is arguably becoming less critical is the underlying infrastructure provider. Trends such as containerisation, Platform as a Service and the commoditisation of cloud infrastructure increasingly mean that, for simple compute and disk storage, the underlying cloud provider matters less.
Cloud is the default, but there are growing deployment choices
There will be an increasing trend towards hybrid deployments, with businesses having the ability to flex between different public cloud providers, private cloud or even on-premise infrastructure. Organisations will increasingly split solutions across cloud infrastructure vendors, for example to take advantage of differing on-demand versus reserved pricing. SAP’s delivery of SAP Cloud Platform across AWS, Azure and Google Cloud is one manifestation of this trend.
As cloud infrastructure becomes commoditised, the critical decisions move up the stack. Infrastructure as a Service, Containers as a Service, Platform as a Service (PaaS) and Software as a Service are not discrete elements with hard lines between them, but a continuum. Architects will increasingly face hard choices as to where on that continuum the right place for a particular solution lies. For example, should a bespoke app be containerised in the Docker style (essentially a Linux image with the software installed in it) and then managed with a container orchestration system such as Kubernetes, or deployed PaaS-style, where just the executable packages are pushed and the PaaS takes care of scaling? There are pros and cons to both approaches, and each is better suited to certain application types, but there is rarely a definitive answer based on technology alone.
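To make the contrast concrete, a minimal Dockerfile for a bespoke Java app might look like the sketch below (the base image, jar name and port are illustrative assumptions, not a recommendation); the PaaS alternative would be to push just the jar and let the platform supply the runtime.

```dockerfile
# Illustrative sketch only: base image, paths and jar name are assumptions
FROM openjdk:8-jre-alpine

# Copy the application's executable package into the image
COPY target/my-app.jar /opt/app/my-app.jar

# The port the app listens on; an orchestrator such as Kubernetes routes traffic to it
EXPOSE 8080

CMD ["java", "-jar", "/opt/app/my-app.jar"]
```

With the container model the team owns the OS image and its patching; with the PaaS model that responsibility shifts to the platform, which is a large part of the trade-off described above.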
Another key decision is when to use “open” versus vendor-specific tools. A purist view would be to keep solutions as open and vendor-neutral, and therefore as portable, as possible. However, there are limits to this approach, as it can cut off access to innovative services, and to higher-level services that may not be available in an open format. Therefore, many real-world solutions will strike a balance between open and vendor-specific.
To take a specific example, an SAP Cloud Platform solution may be written in Java (open) and packaged and deployed with a Cloud Foundry buildpack (open), but utilise SAP-specific data quality services. The job of the solution architect is to get the balance right between portability and openness on the one hand, and the power that can come with using a vendor’s specific services (and the tie-in to that vendor that it entails) on the other.
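As a sketch of where that line falls in practice, a Cloud Foundry application manifest for such an app might look like this (the app name, path and service instance name are hypothetical):

```yaml
# manifest.yml -- illustrative only; names are assumptions
applications:
- name: my-scp-app                # hypothetical app name
  path: target/my-scp-app.jar     # the open, portable part: a plain Java jar
  buildpack: java_buildpack       # open: the standard Cloud Foundry Java buildpack
  memory: 1G
  services:
  - my-data-quality-service       # vendor-specific: a bound SAP service instance
```

Everything above the `services:` section would move to another Cloud Foundry provider largely unchanged; the bound service is the part that ties the solution to SAP.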
Regulatory and compliance concerns are ever more important in architectural choices. Technologies like containerisation, multi-cloud abstractions and PaaS mean it has never been easier to move data and systems around, but the increasing focus on privacy laws such as GDPR means it has never been more critical to know precisely where data resides.
Therefore, vital elements of solution design become making sure data remains in the appropriate geographies, and that governance is robust. There are likely to be increasing numbers of offerings, such as the SAP and Google Cloud data custodian partnership previously discussed, to help architects tackle this.
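As one small illustration of designing residency in from the start, an infrastructure-as-code definition can pin storage to a named region, so the constraint lives in the deployment definition rather than in a policy document. The Terraform sketch below is a hedged example; the region choice and bucket name are assumptions.

```hcl
# Illustrative sketch: pin object storage to an EU region so the data
# residency requirement is enforced by the deployment definition itself
provider "aws" {
  region = "eu-central-1"   # Frankfurt; keeps the data within the EU
}

resource "aws_s3_bucket" "customer_data" {
  bucket = "example-customer-data"   # hypothetical bucket name
}
```

Combined with robust access governance, this kind of declarative control makes it auditable, not just intended, that data stays in the appropriate geography.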