How to Use Kontfix to Manage Kong Konnect Control Planes

Introduction

Konnect is Kong’s SaaS platform that lets users manage every aspect of their APIs in a single place. Its managed gateway control planes serve as the foundation for many other features, allowing users to focus on deploying lightweight data planes almost anywhere.

Creating a new control plane in Konnect is simple: a few clicks in the UI give you a new control plane. Compared to the complex setup required for on-prem deployments, there are no PostgreSQL databases, SSL certificates, FQDNs, or load balancers to manage. With the Konnect Terraform Provider, users can adopt a full GitOps approach and manage everything in CI/CD pipelines.

However, with so many tools available, one question I often get asked is: “What are the best practices for managing Konnect resources effectively?” While each organization has its own tooling and security requirements, at a high level I recommend separating pipelines based on responsibilities. The first step is to treat the control plane as part of the infrastructure build, making sure it is ready to receive mTLS connections from data planes and configuration from other pipelines.

To build out some references, I initially started writing Terraform modules. After using them for a while, and driven by my passion for Nix, I decided to create a Nix module that offers many useful features out of the box, providing the declarative power to build modular and reproducible configurations. This was made possible thanks to the fantastic work of Terranix.

Here I am excited to present Kontfix, an opinionated set of Nix modules that manages Konnect control planes and related resources. Even if you don’t use Nix or this module directly, understanding the design principles can help you decide what approach best fits your workflow.

Solution design

I normally break a Konnect implementation into several high-level milestones:

  • Prepare Control Plane
  • Deploy Data Plane
  • Testing
  • Production go live

Control Plane Preparation

The first milestone involves two pipelines, as shown in the diagram below.

I call the first pipeline the infrastructure build. It ensures control planes are created, cluster certificates are in place, and all necessary information is stored in a secret manager for use by other pipelines. If you already have pipelines managing infrastructure (EC2, EKS, ECS, etc.), Konnect control plane creation can be easily integrated.

The key steps include:

  • Create the control plane
  • Generate or upload certificates (CA or self-signed)
  • Store essential information (e.g., cluster_url, telemetry_url) in a secret manager
  • If running Kong in a private AWS subnet, create PrivateLink

The second pipeline handles syncing Kong configurations to the control planes. By design, it fetches the system account token from the secret manager and uses decK to convert, lint, patch, and sync configurations.

Kontfix helps with the first pipeline. For the second pipeline, I wrote this blog post three years ago. Although it targets on-prem deployments, the same approach can be applied to Konnect with some adjustments.
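To make the second pipeline concrete, here is a minimal sketch of what a decK sync job could look like, assuming the AWS storage backend and a recent decK (v1.28+ for the deck gateway and deck file subcommands). The region, control plane name, and file names are placeholders; a real job also needs AWS and Konnect credentials.

```shell
#!/usr/bin/env bash
set -euo pipefail

REGION="au"      # Konnect region (placeholder)
CP_NAME="demo"   # control plane name (placeholder)

# Fetch the system account token that the infrastructure pipeline stored
# under Kontfix's konnect/${region}/${cp-name}/system-token path.
KONNECT_TOKEN="$(aws secretsmanager get-secret-value \
  --secret-id "konnect/${REGION}/${CP_NAME}/system-token" \
  --query SecretString --output text | jq -r '.token')"

# Lint the declarative config against a ruleset, then sync it to Konnect.
deck file lint -s kong.yaml ruleset.yaml
deck gateway sync kong.yaml \
  --konnect-addr "https://${REGION}.api.konghq.com" \
  --konnect-token "$KONNECT_TOKEN" \
  --konnect-control-plane-name "$CP_NAME"
```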

Kontfix Design Overview

By default, Kontfix automatically generates the Terraform variables needed to manage resources.

  • cp_admin_token is used to manage Konnect control planes. You need to create a system account with the Control Plane Admin team privilege.
  • id_admin_token is used to manage system accounts. You need to create a system account with the Identity Admin privilege.

Other variables are used where needed, for example vault_token for HashiCorp Vault, and aws_region and aws_profile for AWS-related resources.
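Since these are ordinary Terraform variables, one low-friction way to supply them in CI is through TF_VAR_-prefixed environment variables, which Terraform maps onto variables of the same name. The values below are placeholders for illustration only.

```shell
# Terraform picks up any TF_VAR_<name> environment variable, so the
# tokens never need to live in *.tfvars files or the repository.
# All values here are placeholders, not real tokens.
export TF_VAR_cp_admin_token="example-cp-admin-token"
export TF_VAR_id_admin_token="example-id-admin-token"
export TF_VAR_vault_token="example-vault-token"

# List which TF_VAR_ variables are set without printing their values.
env | grep -o '^TF_VAR_[a-z_]*' | sort
```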

Quickstart guide

Prerequisites:
You need a Konnect account, an initial system account token, and Nix installed on your machine.

Single Control Plane

Kontfix creates the relevant resources needed for your control planes. For example, kontfix.controlPlanes.au.demo = {} creates a control plane named “demo” in the au (Australia) region. The module also generates the cp_admin_token variable, the konnect provider, and the provider configuration for the AU region.

If you need a self-signed certificate for this control plane, simply set kontfix.controlPlanes.au.demo.create_certificate=true and kontfix.controlPlanes.au.demo.upload_ca_certificate=true. Kontfix will create a self-signed certificate valid for 90 days, automatically renew it 15 days before expiry, and store it locally in the certs/ folder.

Need a system account for this control plane? Simply set kontfix.controlPlanes.au.demo.system_account.enable=true;. The id_admin_token variable will be included in the generated config along with the system account resource.

Want to generate a system account token that can be used in pipelines for this control plane? Just add kontfix.controlPlanes.au.demo.system_account.generate_token=true;. This option creates a token valid for 30 days, automatically renewed 7 days before expiry. The token is stored in the local tokens/ folder.

Putting all the options above together, we can define our control plane as follows.

{
  kontfix = {
    controlPlanes.au = {
      demo = {
        create_certificate = true;
        upload_ca_certificate = true;
        system_account = {
          enable = true;
          generate_token = true;
        };
      };
    };
  };
}

Control Plane Groups

Kontfix also supports managing control plane groups and their members. Here is a quick example where we define three control planes: dev, platform, and dev-cpg. dev-cpg is the control plane group that pulls configuration from its members. Kontfix takes care of creating all three control planes and adding the members to dev-cpg.

In this particular setup, Kontfix also generates a client certificate for the control plane group so it can accept connections from data planes. Additionally, it creates individual system accounts and tokens for the member control planes, allowing their configurations to be synced independently in the decK pipelines.

{
  kontfix = {
    controlPlanes = {
      au = {
        dev = {
          system_account = {
            enable = true;
            generate_token = true;
          };
        };
        platform = {
          system_account = {
            enable = true;
            generate_token = true;
          };
        };
        dev-cpg = {
          create_certificate = true;
          upload_ca_certificate = true;
          cluster_type = "CLUSTER_TYPE_CONTROL_PLANE_GROUP";
          members = [
            "dev"
            "platform"
          ];
        };
      };
    };
  };
}

Logical groups

It is very common to create a set of control planes for each environment. For example, you might have dev, staging, QA, and prod, each with several control planes. Instead of creating a system account and token per control plane, you might prefer a single system account per environment. Kontfix supports this by letting you define a logical group that manages multiple control planes.

In the configuration below, we have a group dev_team in the AU region with test and demo as its members. A system account and token will be created for dev_team, which can be used to manage configurations for both members. Each member also has its own certificate, allowing its data planes to initiate connections independently.

{
  kontfix = {
    controlPlanes = {
      au = {
        test = {
          create_certificate = true;
          upload_ca_certificate = true;
        };
        demo = {
          create_certificate = true;
          upload_ca_certificate = true;
        };
      };
    };
    groups = {
      au = {
        dev_team = {
          members = [ "test" "demo" ];
          generate_token = true;
          storage_backend = [ "local" ];
        };
      };
    };
  };
}

Terraform Backend and Variables

Kontfix leverages terranix under the hood, which means you can easily define everything else you need in the output. For example, terranix provides a module for configuring Terraform backends that you can use to specify your state backend. You can also add any other configuration to the output.

The configuration below creates the variable hcloud_token and sets up the backend to store the Terraform state locally at ./terraform.tfstate.

{
  backend.local = {
    path = "./terraform.tfstate";
  };

  variable.hcloud_token = {
    sensitive = true;
  };

  kontfix.controlPlanes.au.demo = {};
}

Storage backends

Kontfix supports multiple storage backends for storing certificates, system account tokens, and cluster information. By default, it uses the local storage backend, which stores information in a local directory. You can also use HashiCorp Vault or AWS Secrets Manager as storage backends, and even use more than one backend simultaneously.

  • For the AWS backend, you must set tags. This is considered best practice to ensure you can identify the resources.
  • For the HashiCorp Vault backend, you need to provide the Vault address and authentication method in the defaults section. Currently, Kontfix supports the token and approle authentication methods.

Here is a sample configuration that uses all three storage backends.

{
  kontfix = {
    defaults = {
      storage = {
        hcv = {
          address = "https://example.com";
          auth_method = "token";
        };
      };
    };
    controlPlanes.au = {
      test = {
        create_certificate = true;
        upload_ca_certificate = true;
        system_account = {
          enable = true;
          generate_token = true;
        };
        storage_backend = [
          "local"
          "hcv"
          "aws"
        ];
        aws = {
          enable = true;
          tags = {
            owner = "liyangau";
          };
        };
      };
    };
  };
}

AWS storage backend

Kontfix uses AWS Secrets Manager to store cluster information and the system account token. The secret paths are konnect/${region}/${cp-name}/cluster-config and konnect/${region}/${cp-name}/system-token respectively. Here is the data stored in cluster-config:

{
  "certificate": "Client certificate for secure mTLS authentication",
  "private_key": "Private key that corresponds to the client certificate",
  "issuing_ca": "Certificate Authority that signed the client certificate",
  "cluster_url": "Public endpoint where data plane nodes connect to the Control Plane",
  "telemetry_url": "Public endpoint for sending metrics, logs, and telemetry data",
  "cp_id": "Unique identifier assigned by Konnect to this specific Control Plane",
  "cluster_prefix": "Hostname prefix extracted from the cluster URL (e.g., 'my-cp' from 'my-cp.region.konghq.com')",
  "cluster_control_plane": "DNS name for the Control Plane cluster listener that accepts connections from data planes (format: {prefix}.{region}.cp.konghq.com:443)",
  "cluster_server_name": "TLS Server Name Indication (SNI) used when establishing secure connections to the Control Plane",
  "cluster_telemetry_endpoint": "DNS name for the Control Plane telemetry listener that accepts connections from data planes (format: {prefix}.{region}.tp.konghq.com:443)",
  "cluster_telemetry_server_name": "TLS Server Name Indication (SNI) used when establishing secure connections to the telemetry endpoint",
  "private_cluster_url": "Private endpoint accessible via AWS PrivateLink for secure internal connectivity (format: {aws-region-code}.svc.konghq.com/cp/{cluster-prefix})",
  "private_cluster_server_name": "TLS server name for AWS PrivateLink connections to the private Control Plane endpoint",
  "private_telemetry_url": "Private telemetry endpoint accessible via AWS PrivateLink for secure internal connectivity (format: {aws-region-code}.svc.konghq.com/tp/{cluster-prefix})",
  "private_cluster_telemetry_server_name": "TLS server name for AWS PrivateLink connections to the private telemetry endpoint"
}
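As an illustration of how a data plane bootstrap script might consume this secret, here is a sketch that maps the fields onto the KONG_* environment variables a hybrid-mode data plane reads. The JSON payload below is a trimmed, fake sample in the same shape; in a real pipeline it would come from aws secretsmanager get-secret-value instead.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fake sample in the same shape as the cluster-config secret above;
# a real script would fetch this from AWS Secrets Manager.
cat > cluster-config.json <<'EOF'
{
  "cluster_control_plane": "my-cp.au.cp.konghq.com:443",
  "cluster_server_name": "my-cp.au.cp.konghq.com",
  "cluster_telemetry_endpoint": "my-cp.au.tp.konghq.com:443",
  "cluster_telemetry_server_name": "my-cp.au.tp.konghq.com"
}
EOF

# Map the secret fields onto Kong's hybrid-mode settings. The certificate
# and private_key fields would similarly be written to files and referenced
# via KONG_CLUSTER_CERT / KONG_CLUSTER_CERT_KEY.
export KONG_ROLE=data_plane
export KONG_CLUSTER_MTLS=pki
export KONG_CLUSTER_CONTROL_PLANE="$(jq -r '.cluster_control_plane' cluster-config.json)"
export KONG_CLUSTER_SERVER_NAME="$(jq -r '.cluster_server_name' cluster-config.json)"
export KONG_CLUSTER_TELEMETRY_ENDPOINT="$(jq -r '.cluster_telemetry_endpoint' cluster-config.json)"
export KONG_CLUSTER_TELEMETRY_SERVER_NAME="$(jq -r '.cluster_telemetry_server_name' cluster-config.json)"
```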

The system-token secret is much simpler:

{
  "token": "token",
  "expires_at": "expiry timestamp",
  "created_at": "created timestamp"
}

hcv storage backend

Kontfix uses HashiCorp Vault KV v2 to store cluster information and the system account token. The secret mount is set to konnect by default and, similar to the AWS backend, the paths are ${region}/${cp-name}/cluster-config and ${region}/${cp-name}/system-token.

The data stored in cluster-config does not include the private URLs, since those are generated based on aws_region.

{
  "certificate": "Client certificate for secure mTLS authentication",
  "private_key": "Private key that corresponds to the client certificate",
  "issuing_ca": "Certificate Authority that signed the client certificate",
  "cluster_url": "Public endpoint where data plane nodes connect to the Control Plane",
  "telemetry_url": "Public endpoint for sending metrics, logs, and telemetry data",
  "cp_id": "Unique identifier assigned by Konnect to this specific Control Plane",
  "cluster_prefix": "Hostname prefix extracted from the cluster URL (e.g., 'my-cp' from 'my-cp.region.konghq.com')",
  "cluster_control_plane": "DNS name for the Control Plane cluster listener that accepts connections from data planes (format: {prefix}.{region}.cp.konghq.com:443)",
  "cluster_server_name": "TLS Server Name Indication (SNI) used when establishing secure connections to the Control Plane",
  "cluster_telemetry_endpoint": "DNS name for the Control Plane telemetry listener that accepts connections from data planes (format: {prefix}.{region}.tp.konghq.com:443)",
  "cluster_telemetry_server_name": "TLS Server Name Indication (SNI) used when establishing secure connections to the telemetry endpoint"
}
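To spot-check what Kontfix wrote to Vault, the standard KV v2 commands work, assuming the default konnect mount and a logged-in vault CLI (v1.11+ for the -mount flag); the au/demo path here matches the example control plane from earlier.

```shell
# Requires VAULT_ADDR and a valid token or approle login.
vault kv get -mount=konnect au/demo/cluster-config
vault kv get -mount=konnect -field=token au/demo/system-token
```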

Flake templates

I’ve prepared a few flake templates to help you get started quickly. To get started, simply run:

nix flake init -t github:liyangau/kontfix
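The exact outputs depend on the template you pick, but a typical terranix flow from there is to render config.tf.json and hand it to Terraform. Something along these lines (the flake target name is an assumption, so check the template's README):

```shell
# Render the Nix configuration into Terraform JSON, then apply it.
nix build .#terraformConfiguration
cp -f result config.tf.json
terraform init
terraform apply
```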

That’s all for this post. I will write another post to cover the options available in Kontfix.

I hope you find this useful and give Kontfix and Nix a try.

Thanks for reading and see you next time.