OpenShift HCP Architecture - Multi-Level Network Access


Ever tried searching for Kubernetes reference architectures and only found “Hello World” examples? You know the ones: “Here is how to deploy a pod.” Thanks, but I need to build a fortress, not a tent.

This post is a walkthrough of how I actually design clusters when the environment isn’t just “the cloud.”

The Infrastructure

Let’s be real: deploying a cluster at a Hyperscaler is basically “Credit Card Driven Development.”

~ Extremely simplified ~

  1. Create Account
  2. Enter Credit Card details (and hope you don’t bankrupt the company)
  3. Press “Deploy”
  4. Go get coffee

While that’s convenient, I have a masochistic love for working On-Prem, Disconnected, and Airgapped. You know, places where “automation” is sometimes just a guy named Dave typing really fast.

This is where the fun begins:

  • Compute: Where do I actually put this thing? Baremetal? Virtualized? A Raspberry Pi stack under someone’s desk?
  • Network: The veins of the beast.
  • DNS: It’s always DNS. Except when it’s BGP. But it’s usually DNS.
  • Firewalls: How strict are they? Do I need to fill out a spreadsheet with 5,000 rows to open port 443? (I secretly love creating connection matrices).
  • Registry Access: Can I reach the outside world? Or do I need to mirror the entire internet to a local registry via USB stick?
  • Identity: Is there a central SSO/LDAP, or are we using sticky notes for passwords?

The Scenario (A.K.A. “The Dream”)

For the sake of this post—and to keep my blood pressure in check—let’s make some bold assumptions:

  • Compute: We are deploying on Baremetal (heavy metal!).
  • Security: We need to separate access on the network level (because we trust no one).
  • DNS: It works magically. No, really. Don’t ask how.
  • Firewalls: We have super-smart AI firewalls that intuitively know what traffic is necessary and allow it. (If only…)
  • Connectivity: We can access all Internet Registries.
  • Auth: We have a working SSO.

Now that we’ve established our utopia, let’s look at the network.

The Players

We have a few different characters in our story, and they all need their own safe space (zone) to access the cluster:

  1. The Platform Admin: The wizard behind the curtain.
  2. The Application Admins: The folks who actually deploy stuff.
  3. The Application Users: The people who complain when the stuff is slow.

The Architecture (Or: How to Build a Cloud at Home)

While we might mock the simplicity of Hyperscalers, deep down, we secretly want to build that same beautiful, automated beast ourselves.

I honestly don’t want to know what “left-shifted” Java monstrosity the Application Team is planning to deploy. I just want to give them a place to break it safely.

So, we’re going to use the shiny new toys from Red Hat’s portfolio: Hosted Control Planes with KubeVirt (aka HyperShift).

In plain English: We are deploying OpenShift clusters inside OpenShift clusters and managing them all from an Advanced Cluster Management (ACM) hub. It’s clusters all the way down.
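To make "clusters all the way down" a little less abstract: on the hub, each tenant cluster boils down to a HostedCluster plus a NodePool resource. Here is a heavily trimmed sketch — the names, namespace, sizing, and release image are made up, and several required fields (pull secret, service publishing strategy, etc.) are omitted, so treat it as a shape, not a deployable manifest:

```yaml
# Hedged sketch - NOT a complete, deployable manifest.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: tenant-a            # made-up tenant name
  namespace: clusters
spec:
  platform:
    type: KubeVirt          # control plane pods on the hub, workers as KubeVirt VMs
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64  # example release
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: tenant-a
  namespace: clusters
spec:
  clusterName: tenant-a
  replicas: 2               # two KubeVirt worker VMs
  platform:
    type: KubeVirt
    kubevirt:
      compute:
        cores: 2
        memory: 8Gi
```

The nice part: the Application Admin never sees any of this. They just get a kubeconfig for their Hosted Cluster and go break things in peace.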

Here is the access breakdown:

  • Platform Admin: Needs the keys to the kingdom—access to the Baremetal Nodes and the Hub Cluster.
  • Application Admin: Needs access to their specific Hosted Cluster (and nothing else).
  • User: Just needs to reach the Application (and hopefully not the admin panel).

Let’s visualize this.

So, how do we slice this pie? We’re carving out four distinct network access zones:

  1. Private Backend (Platform Admin): The “Do Not Touch” zone. Direct access to the physical infrastructure (SSH, BMC, etc.).
  2. Public Backend (Platform Admin): The Control Tower. Access to the Hub Cluster management interface (API, Ingress).
  3. Tenant Backend (Application Admin): The Sandbox. Access to the specific Hosted Control Plane (API, Ingress).
  4. Public Frontend (Application Users): The Wild West. Access to the deployed applications.

To keep things sane (and secure), each of these zones gets its own network, locked down by a firewall. Good fences make good neighbors, right?
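And since I already admitted to loving connection matrices, here is a miniature one for the four zones. The ports are the usual OpenShift defaults — consider the whole table an assumption to validate against your own environment, not gospel:

```
Zone                  Source           Destination       Ports (typical)
(1) Private Backend   Platform Admin   Nodes / BMC       22/tcp (SSH), 443/tcp (Redfish), 623/udp (IPMI)
(2) Public Backend    Platform Admin   Hub Cluster       6443/tcp (API), 443/tcp (Ingress/Console)
(3) Tenant Backend    App Admin        Hosted Cluster    6443/tcp (API), 443/tcp (Ingress)
(4) Public Frontend   App User         Application       443/tcp (HTTPS), maybe 80/tcp
```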

      USERS               NETWORK / FIREWALL           INFRASTRUCTURE

+----------------+               |
| Platform Admin |               |
+-------+--------+               |             +---------------------+
        |                        |             |   Baremetal Infra   |
        +-----------(1) Private Backend -----> |   [ Nodes / BMC ]   |
        |                        |             +---------------------+
        |                        |
        +-----------(2) Public Backend ------> +---------------------+
                                 |             |     Hub Cluster     |
                                 |             |   [ API / Ingress ] |
                                 |             +----------+----------+
                                 |                        |
+----------------+               |                        | Manages
|   App Admin    |               |                        v
+-------+--------+               |             +---------------------+
        |                        |             |    Hosted Cluster   |
        +-----------(3) Tenant Backend ------> |   [ API / Ingress ] |
                                 |             +----------+----------+
                                 |                        |
+----------------+               |                        | Runs
|    App User    |               |                        v
+-------+--------+               |             +---------------------+
        |                        |             |     Application     |
        +-----------(4) Public Frontend -----> |    [ Route / App ]  |
                                 |             +---------------------+
                                 |

What’s Next?

Great, we have a plan. We have boxes and arrows. But as we all know, PowerPoint architectures don’t run production workloads.

In the next post, we’ll get our hands dirty. I’ll show you how to actually implement this using MetalLB, NMState, and a healthy dose of YAML. We’ll configure the secondary interfaces, set up the address pools, and pray to the BGP gods.
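Just to give you a taste before then: the general idea is one MetalLB address pool per zone. The pool name below is my invention and the addresses come from the RFC 5737 documentation range, so a sketch, nothing more:

```yaml
# Hedged sketch: one MetalLB pool per zone, advertised via plain L2.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: tenant-backend        # hypothetical pool for zone (3)
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.10-192.0.2.20   # documentation range, replace with your own
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: tenant-backend
  namespace: metallb-system
spec:
  ipAddressPools:
    - tenant-backend
```

One of those per zone, bound to the right interface, and the firewall handles the rest.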

Stay tuned.