consul/testing/deployer

Summary

This is a Go library used to launch one or more Consul clusters that can be peered using the cluster peering feature. Under the covers, Terraform is used in conjunction with the kreuzwerker/docker provider to manage a fleet of local Docker containers and networks.

Configuration

The complete topology of Consul clusters is defined using a topology.Config, which allows you to define a set of networks and reference those networks when assigning nodes and workloads to clusters. Both Consul client agents and consul-dataplane instances are supported.

Here is an example configuration with two peered clusters:

cfg := &topology.Config{
    Networks: []*topology.Network{
        {Name: "dc1"},
        {Name: "dc2"},
        {Name: "wan", Type: "wan"},
    },
    Clusters: []*topology.Cluster{
        {
            Name: "dc1",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc1-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc1"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "ping"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "ping",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "pong"},
                                LocalPort: 9090,
                                Peer:      "peer-dc2-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "ping",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc2-default",
                        }},
                    }},
                },
            },
        },
        {
            Name: "dc2",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc2-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc2"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc2-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindDataplane,
                    Name: "dc2-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "pong"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "pong",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "ping"},
                                LocalPort: 9090,
                                Peer:      "peer-dc1-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "pong",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc1-default",
                        }},
                    }},
                },
            },
        },
    },
    Peerings: []*topology.Peering{{
        Dialing: topology.PeerCluster{
            Name: "dc1",
        },
        Accepting: topology.PeerCluster{
            Name: "dc2",
        },
    }},
}

Once you have a topology configuration, you simply call the appropriate Launch function to validate and boot the clusters.

You may also modify your original configuration (in some allowed ways) and call Relaunch on an existing topology, which will differentially adjust the running infrastructure. This can be useful for things like upgrading instances in place or subtly reconfiguring them.
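As a rough sketch of that flow (the helper name upgradeInPlace is hypothetical, and it assumes the value returned by the Launch helpers is a *sprawl.Sprawl exposing Config and Relaunch methods; check the sprawl package's GoDoc for the exact API), an in-place reconfiguration of the example above might look like this:

// Hedged sketch, not the library's documented API: sp is assumed to expose
// Config() (a copy of the currently running topology configuration) and
// Relaunch(cfg) (differentially reconciles the running infrastructure).
func upgradeInPlace(sp *sprawl.Sprawl) error {
    cfg := sp.Config()

    // Adjust the topology in an allowed way, e.g. repoint dc2's "pong" workload
    // at a different image tag (indices follow the example configuration above).
    cfg.Clusters[1].Nodes[2].Workloads[0].Image = "rboyer/pingpong:latest"

    // Apply only the differences against the already-running containers.
    return sp.Relaunch(cfg)
}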

For Testing

This library is meant to be consumed primarily by unit tests that want a complex, reasonably realistic Consul setup. For that use case, use the sprawl/sprawltest wrapper:

func TestSomething(t *testing.T) {
    cfg := &topology.Config{...}
    sp := sprawltest.Launch(t, cfg)
    // do stuff with 'sp'
}
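
What you do with 'sp' from there is up to the test. As an illustrative, hedged sketch (the APIClientForCluster accessor and its signature are assumptions about the sprawl package, not something documented above), you might grab a Consul API client for one cluster and inspect its catalog:

func TestPeeredCatalog(t *testing.T) {
    cfg := &topology.Config{ /* as in the example configuration above */ }
    sp := sprawltest.Launch(t, cfg)

    // Assumption: the sprawl value can hand back a standard *api.Client scoped
    // to a cluster; the accessor name and signature may differ in the real package.
    client, err := sp.APIClientForCluster("dc1", "")
    if err != nil {
        t.Fatal(err)
    }

    // From here on this is just the ordinary hashicorp/consul/api client.
    svcs, _, err := client.Catalog().Services(nil)
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("services registered in dc1: %v", svcs)
}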