Today’s IT landscape features many application architectures: central data centres supplemented by local IT resources, or entirely distributed setups. David Walker from Yugabyte explains how IT architecture must evolve in response to the edge dynamics triggered by IoT and 5G.
We are currently experiencing a new surge of development in data centres and, with it, in IT architecture. The trigger is the transition from first- to second-generation cloud implementations; the fuel is provided by the growing spread of 5G and IoT applications. The first generation was about internalising the use of the cloud and identifying, somehow, databases and applications that were suitable for this new environment. Now that cloud use has become established, user companies are gradually discovering its downside: they are tied to their cloud provider. And because IoT increases data volumes, costs rise as well.
In an attempt to break the cloud provider monopoly, some organisations are placing specific classes of applications in particular clouds, such as Azure for Office, AWS for OLTP, and GCP for analytics. This allows users to reduce their dependence on cloud providers somewhat. But users are still dependent on a single cloud provider for each application class. Three main trends will shape the development of a second-generation IT architecture for cloud use:
Distribute Applications Across Multiple Cloud Providers
The use of applications, and especially databases, should be distributed across multiple cloud providers. It will not be enough to spread applications across multiple clouds – they must be distributed across both clouds and in-house IT capacity. This allows organisations to bring processing and storage closer to the user and to offload work to the most cost-effective platform for each type of workload. Organisations using vendor-specific databases from cloud operators will need to reconsider this approach. Users looking to migrate their stateless, cloud-native microservices across cloud providers need a data layer – primarily a database – that is supported across clouds.
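One way to picture such a cloud-spanning data layer is a Postgres-compatible database whose nodes sit in different clouds and on-premises, all reachable through a single multi-host connection string. The sketch below builds such a libpq-style DSN; the hostnames are hypothetical examples, not real endpoints.

```python
# Sketch: one libpq-style multi-host connection string whose nodes live in
# different clouds and in an in-house data centre. A Postgres-compatible
# driver tries the listed hosts in order, so the application is not tied
# to a single provider's endpoint. Hostnames below are hypothetical.

def multi_cloud_dsn(hosts, dbname="appdb", user="app"):
    """Build a libpq multi-host DSN from (host, port) pairs."""
    hosts_part = ",".join(f"{h}:{p}" for h, p in hosts)
    return f"postgresql://{user}@{hosts_part}/{dbname}"

NODES = [
    ("db1.aws-eu-central.example.com", 5433),   # node in AWS
    ("db2.gcp-europe-west.example.com", 5433),  # node in GCP
    ("db3.onprem.example.com", 5433),           # node in-house
]

dsn = multi_cloud_dsn(NODES)
print(dsn)
```

Because the client only sees one DSN, adding or removing a provider becomes a configuration change rather than a code change.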
IT Architecture: Use of Three Data Centres in Different Regions
The second trend is the use of three data centres in different regions instead of two. In the past, it was common to operate two data centres, one active and one passive; in the future, distributed databases spanning three locations should replace this model. If one location fails, services are still maintained. Operating in Frankfurt, London and Dublin is no more difficult than operating in two of those locations.
It’s not much more complex, but it’s certainly much more robust. And it’s easy to implement with the technologies now available. This approach also better supports regulatory compliance and data security. For example, it makes it possible to store data exclusively in one of three centres in the US, the EU, and the rest of the world to comply with legal requirements while keeping it in a single global database.
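The robustness argument comes down to quorum arithmetic: with a replication factor of 3, a write needs a majority of 2, so losing any one region still leaves a quorum and the database stays available. A minimal sketch (region names are illustrative):

```python
# Sketch: why three regions beat two. With a replication factor of 3,
# writes need a majority (2 of 3), so the loss of any single region
# still leaves a working quorum.

REGIONS = ["frankfurt", "london", "dublin"]

def quorum(replicas: int) -> int:
    """Smallest majority of `replicas`."""
    return replicas // 2 + 1

def writable(live_regions: list) -> bool:
    """True if enough replicas survive to form a quorum."""
    return len(live_regions) >= quorum(len(REGIONS))

print(writable(REGIONS))                  # True  – all three up
print(writable(["frankfurt", "dublin"]))  # True  – London down
print(writable(["frankfurt"]))            # False – two regions down
```

With only two data centres, the same arithmetic gives a quorum of 2 of 2, so the loss of either site halts writes – which is exactly why the active/passive pattern needed manual failover.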
IT Architecture: Agnostic APIs for Data Storage
The third trend is the move to agnostic APIs for data storage. I predict that the Postgres API will prevail for SQL databases and the Cassandra API for NoSQL databases, just as object storage has converged on the S3 API. This amounts to a de facto unification of APIs for data storage. It matters because it removes a barrier that hampers scaling and prevents organisations from moving between different data plane platforms.
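What an agnostic API buys you is that application code depends only on the standard interface, while the backend is injected via configuration. The sketch below illustrates the idea with Python's DB-API: the repository class knows nothing about the backend, so swapping providers means swapping a connection, not rewriting code. Here sqlite3 stands in for any Postgres-compatible backend purely so the sketch runs (note that placeholder style varies by driver – `?` here, `%s` for most Postgres drivers).

```python
import sqlite3

# Sketch: application code written against the standard SQL interface only,
# with the backend connection injected. Moving between data platforms then
# becomes a configuration change. sqlite3 is used as a stand-in backend
# so this example is self-contained and runnable.

class UserRepo:
    def __init__(self, conn):
        self.conn = conn  # any DB-API 2.0 connection
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def names(self):
        rows = self.conn.execute("SELECT name FROM users ORDER BY id")
        return [row[0] for row in rows]

repo = UserRepo(sqlite3.connect(":memory:"))
repo.add("alice")
repo.add("bob")
print(repo.names())  # ['alice', 'bob']
```

The same separation is what makes the predicted convergence valuable: once every vendor speaks the Postgres, Cassandra or S3 dialect, the `UserRepo` layer never has to change.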
The commoditisation of the data centre is already underway. The three trends described are the inevitable result of users’ desire to scale their IT architecture cost-effectively – and without being locked into a single provider. Providers will not give up their lock-in advantages without a fight. Ultimately, however, the approach that promises users more independence will prevail. This evolution will bring with it a wealth of innovation, not just at the edge but also in the way we build the data centres of the future.