README.md
Summary
This is a Go library used to launch one or more Consul clusters that can be peered using the cluster peering feature. Under the covers, Terraform is used in conjunction with the kreuzwerker/docker provider to manage a fleet of local Docker containers and networks.
Configuration
The complete topology of Consul clusters is defined using a topology.Config which allows you to define a set of networks and reference those networks when assigning nodes and workloads to clusters. Both Consul clients and consul-dataplane instances are supported.
Here is an example configuration with two peered clusters:
cfg := &topology.Config{
  Networks: []*topology.Network{
    {Name: "dc1"},
    {Name: "dc2"},
    {Name: "wan", Type: "wan"},
  },
  Clusters: []*topology.Cluster{
    {
      Name: "dc1",
      Nodes: []*topology.Node{
        {
          Kind: topology.NodeKindServer,
          Name: "dc1-server1",
          Addresses: []*topology.Address{
            {Network: "dc1"},
            {Network: "wan"},
          },
        },
        {
          Kind: topology.NodeKindClient,
          Name: "dc1-client1",
          Workloads: []*topology.Workload{
            {
              ID:             topology.ID{Name: "mesh-gateway"},
              Port:           8443,
              EnvoyAdminPort: 19000,
              IsMeshGateway:  true,
            },
          },
        },
        {
          Kind: topology.NodeKindClient,
          Name: "dc1-client2",
          Workloads: []*topology.Workload{
            {
              ID:             topology.ID{Name: "ping"},
              Image:          "rboyer/pingpong:latest",
              Port:           8080,
              EnvoyAdminPort: 19000,
              Command: []string{
                "-bind", "0.0.0.0:8080",
                "-dial", "127.0.0.1:9090",
                "-pong-chaos",
                "-dialfreq", "250ms",
                "-name", "ping",
              },
              Upstreams: []*topology.Upstream{{
                ID:        topology.ID{Name: "pong"},
                LocalPort: 9090,
                Peer:      "peer-dc2-default",
              }},
            },
          },
        },
      },
      InitialConfigEntries: []api.ConfigEntry{
        &api.ExportedServicesConfigEntry{
          Name: "default",
          Services: []api.ExportedService{{
            Name: "ping",
            Consumers: []api.ServiceConsumer{{
              Peer: "peer-dc2-default",
            }},
          }},
        },
      },
    },
    {
      Name: "dc2",
      Nodes: []*topology.Node{
        {
          Kind: topology.NodeKindServer,
          Name: "dc2-server1",
          Addresses: []*topology.Address{
            {Network: "dc2"},
            {Network: "wan"},
          },
        },
        {
          Kind: topology.NodeKindClient,
          Name: "dc2-client1",
          Workloads: []*topology.Workload{
            {
              ID:             topology.ID{Name: "mesh-gateway"},
              Port:           8443,
              EnvoyAdminPort: 19000,
              IsMeshGateway:  true,
            },
          },
        },
        {
          Kind: topology.NodeKindDataplane,
          Name: "dc2-client2",
          Workloads: []*topology.Workload{
            {
              ID:             topology.ID{Name: "pong"},
              Image:          "rboyer/pingpong:latest",
              Port:           8080,
              EnvoyAdminPort: 19000,
              Command: []string{
                "-bind", "0.0.0.0:8080",
                "-dial", "127.0.0.1:9090",
                "-pong-chaos",
                "-dialfreq", "250ms",
                "-name", "pong",
              },
              Upstreams: []*topology.Upstream{{
                ID:        topology.ID{Name: "ping"},
                LocalPort: 9090,
                Peer:      "peer-dc1-default",
              }},
            },
          },
        },
      },
      InitialConfigEntries: []api.ConfigEntry{
        &api.ExportedServicesConfigEntry{
          Name: "default",
          Services: []api.ExportedService{{
            Name: "pong",
            Consumers: []api.ServiceConsumer{{
              Peer: "peer-dc1-default",
            }},
          }},
        },
      },
    },
  },
  Peerings: []*topology.Peering{{
    Dialing: topology.PeerCluster{
      Name: "dc1",
    },
    Accepting: topology.PeerCluster{
      Name: "dc2",
    },
  }},
}
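Note the symmetry between the two clusters: each one exports the workload its peer dials, so dc1 exports ping to the peer-dc2-default consumer while dc2 exports pong to the peer-dc1-default consumer, matching the Upstreams declared on the opposite side.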
Once you have a topology configuration, you simply call the appropriate Launch function to validate and boot the cluster.
You may also modify your original configuration (in some allowed ways) and call Relaunch on an existing topology, which will differentially adjust the running infrastructure. This can be useful to do things like upgrade instances in place or subtly reconfigure them.
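For illustration, a direct launch followed by an in-place change might look roughly like the following. This is only a sketch: the exact signatures of sprawl.Launch and Relaunch (a logger, a scratch directory for state, and the config) are assumptions and should be checked against the sprawl package.

// Sketch only: sprawl.Launch/Relaunch signatures below are assumed, not confirmed.
logger := hclog.New(&hclog.LoggerOptions{Name: "sprawl"})

sp, err := sprawl.Launch(logger, "/tmp/sprawl-state", cfg) // validate and boot the topology
if err != nil {
  log.Fatal(err)
}

// Tweak the original config in an allowed way (here: the ping workload's
// image from the example above, using a hypothetical new tag), then adjust
// the running infrastructure differentially instead of rebuilding it.
cfg.Clusters[0].Nodes[2].Workloads[0].Image = "rboyer/pingpong:v2"
if err := sp.Relaunch(cfg); err != nil {
  log.Fatal(err)
}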
For Testing
It is meant to be consumed primarily by unit tests desiring a complex, reasonably realistic Consul setup. For that use case, use the sprawl/sprawltest wrapper:
func TestSomething(t *testing.T) {
  cfg := &topology.Config{...}
  sp := sprawltest.Launch(t, cfg)
  // do stuff with 'sp'
}
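Beyond launching, a test will typically reach into the running clusters. The sketch below assumes a helper on the returned sprawl value for building a Consul API client (APIClientForCluster is an assumed name, not a confirmed method) and uses the standard github.com/hashicorp/consul/api peering endpoint to confirm that the dc1/dc2 peering from the example came up:

func TestPeeredPing(t *testing.T) {
  cfg := &topology.Config{ /* the two peered clusters from above */ }
  sp := sprawltest.Launch(t, cfg)

  // Assumed helper name: substitute whatever accessor sprawl actually
  // exposes for obtaining an api.Client against a launched cluster.
  client, err := sp.APIClientForCluster("dc1", "" /* token */)
  if err != nil {
    t.Fatal(err)
  }

  // List peerings registered in dc1; the config above establishes one with dc2.
  peerings, _, err := client.Peerings().List(context.Background(), nil)
  if err != nil {
    t.Fatal(err)
  }
  if len(peerings) == 0 {
    t.Fatal("expected the dc1-dc2 peering to be registered")
  }
}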