https://univold.com/proofpoint-cloud-app-security-broker-iaas-protection/
Proofpoint PFPT CASB IAAS PROTECTION 5GB DAILY TRAFFIC PER ACCT DAY - S PP-B-IAAS-S-A-109
@Proofpoint #PFPT #CASB #IAAS #PROTECTION #5GB #DAILY #TRAFFIC @Univold1
If you think #unplugtrump is only about user-facing apps and services, think again.
In Germany and Europe, practically ALL #saas solutions run on #iaas #paas services from American corporations, even when part of the infrastructure is located locally.
Put differently: as of today, there is NO cloud from Germany/Europe.
The Return of Infrastructure Independence: Breaking Free from US Hyperscalers
In the rapidly evolving landscape of technology, we sometimes find ourselves experiencing a sense of déjà vu. The current state of cloud computing and infrastructure management feels remarkably similar to the late 1990s server market—a time of major technological transition that ultimately rewarded those who maintained traditional expertise.
The Great Windows Server Migration of the Late ’90s
Cast your mind back to the late 1990s. Windows NT was gaining significant traction in the enterprise server space. Microsoft’s marketing machine was in full swing, promoting Windows as the future of server technology. The interface was familiar, the management tools were accessible, and the promise was enticing: simplify your infrastructure and reduce costs.
Many companies bought into this vision. They let go of their Unix administrators—the wizards who understood the deep intricacies of system architecture—and pivoted toward the seemingly more accessible Windows ecosystem. Unix expertise was deemed outdated, a relic of computing’s past.
But then something unexpected happened: Linux emerged as a powerful force. This open-source Unix-like operating system combined the robustness of traditional Unix with modern development approaches. Companies that had maintained their Unix expertise found themselves with a significant competitive advantage, while those who had discarded that knowledge scrambled to adapt.
Today’s Dangerous Dependency on US Hyperscalers
Fast forward to today, and we’re witnessing a similar phenomenon, but with far greater geopolitical implications. The cloud market has become dominated by a handful of US-based hyperscalers: AWS, Azure, and Google Cloud Platform. These giants now control the backbone of global digital infrastructure, creating an unprecedented level of dependency.
Organizations worldwide have entrusted their mission-critical systems, data, and intellectual property to these American corporations. This concentration of digital power in the hands of a few US companies presents significant risks: legal exposure to US jurisdiction, vulnerability to geopolitical pressure, single points of failure for critical services, and a steady erosion of local infrastructure expertise.
Today’s developers and systems engineers often have limited exposure to building and maintaining independent infrastructure stacks. The knowledge of creating self-sufficient, sovereign digital platforms has been sacrificed at the altar of convenience offered by the hyperscalers.
The Coming Era of Regional Digital Sovereignty
As geopolitical tensions rise and concerns about surveillance escalate, we’re approaching a breaking point that parallels the Linux revolution of the early 2000s. The excessive centralization of cloud infrastructure in the hands of US corporations is becoming increasingly untenable for many regions and organizations around the world.
Europe, in particular, stands at a crossroads. With its strong regulatory framework through GDPR and emphasis on digital sovereignty, the continent has the potential to lead a shift toward regional cloud infrastructure. A “European Cloud” built on open standards and operated independently of US hyperscalers could provide a template for other regions seeking digital autonomy.
This is where those 50+ year-old systems engineers—the ones who understand how to build infrastructure from the ground up—will become invaluable again. Their knowledge of architecting complete technology stacks without reliance on hyperscaler ecosystems will be crucial as organizations and regions work to establish independent digital capabilities.
Building Regional Digital Independence
The path to reducing dependency on US hyperscalers requires:
- Investment in regional data centers and network infrastructure operated under local jurisdiction
- Cloud platforms built on open standards and open-source software, to avoid trading one lock-in for another
- Rebuilding full-stack engineering expertise, from hardware and networking up to the application layer
- Regulatory and procurement policies that favor sovereign infrastructure
The Role of Experienced Infrastructure Engineers
The systems engineers who remember a world before AWS, Azure, and Google Cloud will play a pivotal role in this transition. Their experience building and managing independent data centers, designing network architectures without reliance on hyperscaler services, and understanding the full technology stack from hardware to application will be essential.
These veterans know what it takes to build robust, independent infrastructure. They understand the pitfalls, requirements, and strategic considerations that younger engineers, raised entirely in the hyperscaler era, may overlook.
Conclusion
The technology industry has always moved in cycles. What seems obsolete today may become critical tomorrow. Just as Linux vindicated those Unix administrators who maintained their expertise through the Windows NT revolution, the growing movement toward digital sovereignty could similarly elevate those who’ve preserved their knowledge of building independent infrastructure.
As regions like Europe work to establish their own cloud ecosystems and reduce dependency on US hyperscalers, the experienced systems engineers who understand how to build truly independent technology stacks will become not just relevant, but essential to our digital future.
The coming years may well see a renaissance of regional infrastructure expertise, as organizations and nations alike recognize that true digital resilience requires breaking free from excessive dependency on the American tech giants that currently dominate our global digital landscape.
See also: https://berthub.eu/articles/posts/you-can-no-longer-base-your-government-and-society-on-us-clouds/
I want to design a WireGuard gateway hosted on an IaaS. The list of authorized public keys is published in an LDAP directory hosted in a private subnet. The IaaS provider offers a Terraform provider, supports user-data, and is compatible with create_before_destroy. The list of public keys is expected to change rarely, and changes are expected to be enacted in less than 24h.
I tried to implement this using an immutable approach: the LDAP directory is queried by Terraform/OpenTofu, and the list of public keys is compiled in a cloud-init compatible user-data field. Cloud-init is in charge of generating files, installing packages and deploying the config during startup of a vanilla Ubuntu image. If the generated user-data field changes, the VM is destroyed and recreated with the updated config. The create_before_destroy directive ensures minimal yet non-zero downtime, with the gateway "public" IP being migrated to the new VM at the end of the plan.
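For concreteness, here is a minimal sketch of what such a configuration might look like. The `example_instance` resource name and the `fetch-peers.sh` script are placeholders, since the actual compute resource is provider-specific and Terraform has no built-in LDAP data source; the `external` and `cloudinit` data sources and the `create_before_destroy` lifecycle argument are standard.

```hcl
# Immutable WireGuard gateway: any change to the rendered user-data
# forces the instance to be replaced with a freshly configured VM.

data "external" "wg_peers" {
  # Hypothetical helper script; queries LDAP from the Terraform runner
  # and prints JSON such as {"peers": "pubkey1,pubkey2"}.
  program = ["${path.module}/scripts/fetch-peers.sh"]
}

data "cloudinit_config" "gateway" {
  gzip          = false
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    # Template renders the peer list into the WireGuard config and
    # instructs cloud-init to install packages and write files on boot.
    content = templatefile("${path.module}/cloud-config.yaml.tftpl", {
      peer_keys = split(",", data.external.wg_peers.result.peers)
    })
  }
}

resource "example_instance" "wg_gateway" {
  image     = "ubuntu-24.04"
  user_data = data.cloudinit_config.gateway.rendered

  # The replacement VM is created and boots before the old one is
  # destroyed; the public IP is then migrated at the end of the plan.
  lifecycle {
    create_before_destroy = true
  }
}
```

With this layout, the full peer list is visible in the rendered user-data, so an audit of the gateway's authorized keys reduces to inspecting one Terraform output.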
For me, the benefits of this approach are that the LDAP server remains unknown and unreachable from the VPN gateway; the list of authorized public keys is static, stable, and easily auditable (one just needs to look at the user-data); and the risk of configuration drift is minimal (since there is no live reconfiguration and the instance is destroyed with every change).
People around me seem unconvinced by this approach, citing a vague availability risk and preferring a mutable approach in which a systemd timer would poll the directory and update the WireGuard config in place.
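For comparison, the mutable approach they describe would look roughly like the following pair of systemd units. The unit names and the `update-wg-peers` script are hypothetical; the script would need LDAP reachability from the gateway and would typically apply changes without dropping sessions via `wg syncconf`.

```ini
# /etc/systemd/system/wg-sync.service
[Unit]
Description=Refresh WireGuard peers from LDAP

[Service]
Type=oneshot
# Hypothetical script: queries the directory, rewrites the [Peer]
# sections of /etc/wireguard/wg0.conf, then applies the new peer set
# in place with `wg syncconf wg0 <(wg-quick strip wg0)`.
ExecStart=/usr/local/bin/update-wg-peers

# /etc/systemd/system/wg-sync.timer
[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Note that this variant requires opening a network path from the gateway to the LDAP server, which is precisely what the immutable design avoids.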
What is your opinion on this? I am fairly new to Terraform/OpenTofu, so I might be missing a clue here.
Guess I've been on here long enough to do an #introduction I'm Chris, and I live in #Bermuda
I'm married with two kids, a son and a daughter.
My hobbies include #mountainbiking #gaming (including #TTRPG #DnD and #MagicTheGathering) and #spearfishing.
I currently work as an #IaaS Support Engineer at a #cloud and #datacentre provider. I do everything from setting up virtual environments and orchestration through #SaltStack, to #DevOps and managing our #Networking and #Switching platform.