## Summary

This is a Go library used to launch one or more Consul clusters that can be
peered using the cluster peering feature. Under the covers terraform is used
in conjunction with the kreuzwerker/docker provider to manage a fleet of local
docker containers and networks.
## Configuration

The complete topology of Consul clusters is defined using a topology.Config
which allows you to define a set of networks and reference those networks when
assigning nodes and workloads to clusters. Both Consul clients and
consul-dataplane instances are supported.

Here is an example configuration with two peered clusters:
```go
cfg := &topology.Config{
    Networks: []*topology.Network{
        {Name: "dc1"},
        {Name: "dc2"},
        {Name: "wan", Type: "wan"},
    },
    Clusters: []*topology.Cluster{
        {
            Name: "dc1",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc1-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc1"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc1-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "ping"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "ping",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "pong"},
                                LocalPort: 9090,
                                Peer:      "peer-dc2-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "ping",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc2-default",
                        }},
                    }},
                },
            },
        },
        {
            Name: "dc2",
            Nodes: []*topology.Node{
                {
                    Kind: topology.NodeKindServer,
                    Name: "dc2-server1",
                    Addresses: []*topology.Address{
                        {Network: "dc2"},
                        {Network: "wan"},
                    },
                },
                {
                    Kind: topology.NodeKindClient,
                    Name: "dc2-client1",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "mesh-gateway"},
                            Port:           8443,
                            EnvoyAdminPort: 19000,
                            IsMeshGateway:  true,
                        },
                    },
                },
                {
                    Kind: topology.NodeKindDataplane,
                    Name: "dc2-client2",
                    Workloads: []*topology.Workload{
                        {
                            ID:             topology.ID{Name: "pong"},
                            Image:          "rboyer/pingpong:latest",
                            Port:           8080,
                            EnvoyAdminPort: 19000,
                            Command: []string{
                                "-bind", "0.0.0.0:8080",
                                "-dial", "127.0.0.1:9090",
                                "-pong-chaos",
                                "-dialfreq", "250ms",
                                "-name", "pong",
                            },
                            Upstreams: []*topology.Upstream{{
                                ID:        topology.ID{Name: "ping"},
                                LocalPort: 9090,
                                Peer:      "peer-dc1-default",
                            }},
                        },
                    },
                },
            },
            InitialConfigEntries: []api.ConfigEntry{
                &api.ExportedServicesConfigEntry{
                    Name: "default",
                    Services: []api.ExportedService{{
                        Name: "pong",
                        Consumers: []api.ServiceConsumer{{
                            Peer: "peer-dc1-default",
                        }},
                    }},
                },
            },
        },
    },
    Peerings: []*topology.Peering{{
        Dialing: topology.PeerCluster{
            Name: "dc1",
        },
        Accepting: topology.PeerCluster{
            Name: "dc2",
        },
    }},
}
```
Once you have a topology configuration, you simply call the appropriate
`Launch` function to validate and boot the cluster.

You may also modify your original configuration (in some allowed ways) and call
`Relaunch` on an existing topology, which will differentially adjust the
running infrastructure. This can be useful to do things like upgrade instances
in place or subtly reconfigure them.
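As a rough illustration, a launch-then-relaunch flow might look like the sketch
below. The argument shapes of `sprawl.Launch` (logger, working directory) and
the `Relaunch` method are assumptions here, as are the import paths, which
presume the library lives under `testing/deployer` in the consul repository;
consult the `sprawl` package for the actual API.

```go
package main

import (
    "log"

    "github.com/hashicorp/consul/testing/deployer/sprawl"
    "github.com/hashicorp/consul/testing/deployer/topology"
    "github.com/hashicorp/go-hclog"
)

func main() {
    // The topology config from the example above.
    cfg := &topology.Config{ /* ... */ }

    // Assumed entrypoint shape: a logger, a working directory for the
    // generated terraform state, and the topology config. Verify against
    // the sprawl package before relying on this.
    sp, err := sprawl.Launch(hclog.Default(), "./workdir", cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Modify cfg in one of the allowed ways (for example, swapping a
    // workload image), then differentially adjust the running
    // infrastructure.
    if err := sp.Relaunch(cfg); err != nil {
        log.Fatal(err)
    }
}
```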
## For Testing

This library is meant to be consumed primarily by unit tests desiring a
complex, reasonably realistic Consul setup. For that use case, use the
sprawl/sprawltest wrapper:
```go
func TestSomething(t *testing.T) {
    cfg := &topology.Config{ /* ... */ }
    sp := sprawltest.Launch(t, cfg)

    // do stuff with 'sp'
}
```
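As a hint of what "do stuff with 'sp'" might look like, the sketch below asks
the sprawl for a Consul API client and makes a simple catalog assertion. The
`APIClientForCluster` accessor name and its arguments are hypothetical, and
the import paths again assume the `testing/deployer` module layout; check the
methods on `sprawl.Sprawl` for the real helpers.

```go
package something

import (
    "testing"

    "github.com/hashicorp/consul/testing/deployer/sprawl/sprawltest"
    "github.com/hashicorp/consul/testing/deployer/topology"
)

func TestPeeredServices(t *testing.T) {
    // The two-cluster topology from the example above.
    cfg := &topology.Config{ /* ... */ }
    sp := sprawltest.Launch(t, cfg)

    // Hypothetical accessor: a helper on sprawl.Sprawl returning a
    // *api.Client scoped to one cluster; the real method name and
    // signature may differ.
    client, err := sp.APIClientForCluster("dc1", "")
    if err != nil {
        t.Fatal(err)
    }

    // Example assertion: the exported "ping" service is registered in dc1.
    svcs, _, err := client.Catalog().Service("ping", "", nil)
    if err != nil {
        t.Fatal(err)
    }
    if len(svcs) == 0 {
        t.Fatal("expected ping to be registered in dc1")
    }
}
```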