consul/testing/deployer
GoDoc

Summary

This is a Go library used to launch one or more Consul clusters that can be peered using the cluster peering feature. Under the covers, Terraform is used in conjunction with the kreuzwerker/docker provider to manage a fleet of local Docker containers and networks.

Configuration

The complete topology of Consul clusters is defined using a topology.Config, which allows you to define a set of networks and reference those networks when assigning nodes and workloads to clusters. Both Consul client agents and consul-dataplane instances are supported.

Here is an example configuration with two peered clusters:

cfg := &topology.Config{
    Networks: []*topology.Network{
        {Name: "dc1"},
        {Name: "dc2"},
        {Name: "wan", Type: "wan"},
    },
    Clusters: []*topology.Cluster{
        {
            Name: "dc1",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc1-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc1"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "ping"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "ping",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "pong"},
                                LocalPort: 9090,
                                Peer:      "peer-dc2-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "ping",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc2-default",
                        }},
                    }},
                },
            },
        },
        {
            Name: "dc2",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc2-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc2"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc2-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindDataplane,
                    Name: "dc2-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "pong"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "pong",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "ping"},
                                LocalPort: 9090,
                                Peer:      "peer-dc1-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "ping",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc2-default",
                        }},
                    }},
                },
            },
        },
    },
    Peerings: []*topology.Peering{{
        Dialing: topology.PeerCluster{
            Name: "dc1",
        },
        Accepting: topology.PeerCluster{
            Name: "dc2",
        },
    }},
}

Once you have a topology configuration, you simply call the appropriate Launch function to validate it and boot the clusters.
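
For example, a direct (non-test) launch might look like the sketch below. It assumes the sprawl package's Launch entrypoint takes an hclog logger, a scratch working directory, and the topology configuration; check the package GoDoc for the exact signature in your version.

package deployerexample

import (
    "github.com/hashicorp/go-hclog"

    "github.com/hashicorp/consul/testing/deployer/sprawl"
    "github.com/hashicorp/consul/testing/deployer/topology"
)

func launchTopology(cfg *topology.Config) (*sprawl.Sprawl, error) {
    // Launch validates the configuration, compiles it into a runnable
    // topology, and brings up the docker containers and networks with
    // terraform. The scratch directory (assumed second argument) is where
    // terraform state and generated files are written.
    return sprawl.Launch(hclog.Default(), "/tmp/deployer-work", cfg)
}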

You may also modify your original configuration (in some allowed ways) and call Relaunch on an existing topology, which will differentially adjust the running infrastructure. This can be useful for things like upgrading instances in place or subtly reconfiguring them.
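
Continuing the sketch above, an in-place change might look like the following. It assumes Relaunch accepts the modified *topology.Config and reconciles the running containers against it; the newer image tag is hypothetical and purely illustrative.

func upgradePing(sp *sprawl.Sprawl, cfg *topology.Config) error {
    // Point the "ping" workload (dc1-client2 in the example above) at a
    // newer image tag, then apply only that difference to the running
    // infrastructure.
    cfg.Clusters[0].Nodes[2].Workloads[0].Image = "rboyer/pingpong:v2" // hypothetical tag
    return sp.Relaunch(cfg)
}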

For Testing

This library is meant to be consumed primarily by unit tests that want a complex, reasonably realistic Consul setup. For that use case, use the sprawl/sprawltest wrapper:

func TestSomething(t *testing.T) {
    cfg := &topology.Config{...}
    sp := sprawltest.Launch(t, cfg)
    // do stuff with 'sp'
}
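
A test can then use the returned handle to interact with the running clusters. The sketch below assumes the returned Sprawl value exposes an APIClientForCluster helper that yields a standard *api.Client for a named cluster; the method name and its token argument are assumptions, so consult the package GoDoc before relying on them.

package deployer_test

import (
    "context"
    "testing"

    "github.com/hashicorp/consul/testing/deployer/sprawl/sprawltest"
    "github.com/hashicorp/consul/testing/deployer/topology"
)

func TestPeeringEstablished(t *testing.T) {
    cfg := &topology.Config{ /* as in the Configuration section above */ }
    sp := sprawltest.Launch(t, cfg)

    // Assumed helper: an API client pointed at the dc1 servers,
    // authenticated with the given token ("" when ACLs are disabled).
    client, err := sp.APIClientForCluster("dc1", "")
    if err != nil {
        t.Fatal(err)
    }

    // Verify that the peering to dc2 was established.
    peering, _, err := client.Peerings().Read(context.Background(), "peer-dc2-default", nil)
    if err != nil {
        t.Fatal(err)
    }
    if peering == nil {
        t.Fatal("expected peering to exist")
    }
}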