consul/testing/deployer

Contents: sprawl/, topology/, util/, .gitignore, README.md, TODO.md, go.mod, go.sum, update-latest-versions.sh

README.md

Summary

This is a Go library used to launch one or more Consul clusters that can be peered using the cluster peering feature. Under the covers, Terraform is used in conjunction with the kreuzwerker/docker provider to manage a fleet of local Docker containers and networks.

Configuration

The complete topology of Consul clusters is defined using a topology.Config, which allows you to define a set of networks and reference those networks when assigning nodes and services to clusters. Both Consul client agents and consul-dataplane instances are supported.

Here is an example configuration with two peered clusters. It assumes the github.com/hashicorp/consul/api and github.com/hashicorp/consul/testing/deployer/topology packages are imported under their usual names (api and topology):

cfg := &topology.Config{
    Networks: []*topology.Network{
        {Name: "dc1"},
        {Name: "dc2"},
        {Name: "wan", Type: "wan"},
    },
    Clusters: []*topology.Cluster{
        {
            Name: "dc1",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc1-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc1"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client1",
                    Services: []*topology.Service{
                        {
                            ID:             topology.ServiceID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client2",
                    Services: []*topology.Service{
                        {
                            ID:             topology.ServiceID{Name: "ping"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "ping",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ServiceID{Name: "pong"},
                                LocalPort: 9090,
                                Peer:      "peer-dc2-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "ping",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc2-default",
                        }},
                    }},
                },
            },
        },
        {
            Name: "dc2",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc2-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc2"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc2-client1",
                    Services: []*topology.Service{
                        {
                            ID:             topology.ServiceID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindDataplane,
                    Name: "dc2-client2",
                    Services: []*topology.Service{
                        {
                            ID:             topology.ServiceID{Name: "pong"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "pong",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ServiceID{Name: "ping"},
                                LocalPort: 9090,
                                Peer:      "peer-dc1-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "pong",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc1-default",
                        }},
                    }},
                },
            },
        },
    },
    Peerings: []*topology.Peering{{
        Dialing: topology.PeerCluster{
            Name: "dc1",
        },
        Accepting: topology.PeerCluster{
            Name: "dc2",
        },
    }},
}

Once you have a topology configuration, you simply call the appropriate Launch function to validate it and boot the topology.
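
For example, outside of a test harness you might boot a topology directly with the sprawl package. The following is only a minimal sketch: the exact sprawl.Launch signature, the hclog logger, the working-directory argument, and the Stop method are assumptions, so verify them against the package's GoDoc before relying on them.

import (
    "log"

    "github.com/hashicorp/go-hclog"

    "github.com/hashicorp/consul/testing/deployer/sprawl"
    "github.com/hashicorp/consul/testing/deployer/topology"
)

func launch(cfg *topology.Config) {
    logger := hclog.New(&hclog.LoggerOptions{Name: "sprawl"})

    // Launch validates the topology and then brings up the docker
    // containers and networks via terraform. The working directory
    // holds the generated terraform state. (Signature assumed.)
    sp, err := sprawl.Launch(logger, "./workdir", cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer sp.Stop() // tear everything down when finished (method assumed)

    // ... interact with the running clusters via 'sp' ...
}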

You may also modify your original configuration (in some allowed ways) and call Relaunch on an existing topology, which will differentially adjust the running infrastructure. This can be useful to do things like upgrade instances in place or subtly reconfigure them.
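
For instance, an in-place server upgrade might look like the sketch below, continuing from the previous example. The Images field on topology.Node and the Relaunch method on the sprawl handle are assumptions used for illustration only:

// Point the dc1 servers at a newer Consul image (field names assumed).
for _, node := range cfg.Clusters[0].Nodes {
    if node.Kind == topology.NodeKindServer {
        node.Images.Consul = "hashicorp/consul:1.17"
    }
}

// Relaunch diffs the modified configuration against the running
// topology and replaces only the pieces that changed (method assumed).
if err := sp.Relaunch(cfg); err != nil {
    log.Fatal(err)
}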

For Testing

This library is meant to be consumed primarily by unit tests that want a complex, reasonably realistic Consul setup. For that use case, use the sprawl/sprawltest wrapper:

func TestSomething(t *testing.T) {
    cfg := &topology.Config{...}
    sp := sprawltest.Launch(t, cfg)
    // do stuff with 'sp'
}
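
Continuing inside the test, the returned handle can be used to reach the running clusters. A sketch follows; the APIClientForCluster method name is an assumption, so check the sprawl GoDoc for the real accessor:

    // Fetch a Consul API client pointed at dc1 (method name assumed).
    client, err := sp.APIClientForCluster("dc1", "")
    if err != nil {
        t.Fatal(err)
    }

    // From here any standard github.com/hashicorp/consul/api call works.
    svcs, _, err := client.Catalog().Services(nil)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("services in dc1: %v", svcs)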