
Envoy Integration Test Windows (#18007)

* [CONSUL-395] Update check_hostport and Usage (#40)

* [CONSUL-397] Copy envoy binary from Image (#41)

* [CONSUL-382] Support openssl in unique test dockerfile (#43)

* [CONSUL-405] Add bats to single container (#44)

* [CONSUL-414] Run Prometheus Test Cases and Validate Changes (#46)

* [CONSUL-410] Run Jaeger in Single container (#45)

* [CONSUL-412] Run test-sds-server in single container (#48)

* [CONSUL-408] Clean containers (#47)

* [CONSUL-384] Rebase and sync fork (#50)

* [CONSUL-415] Create Scenarios Troubleshooting Docs (#49)

* [CONSUL-417] Update Docs Single Container (#51)

* [CONSUL-428] Add Socat to single container (#54)

* [CONSUL-424] Replace pkill in kill_envoy function (#52)

* [CONSUL-434] Modify Docker run functions in Helper script (#53)

* [CONSUL-435] Replace docker run in set_ttl_check_state & wait_for_agent_service_register functions (#55)

* [CONSUL-438] Add netcat (nc) in the Single container Dockerfile (#56)

* [CONSUL-429] Replace Docker run with Docker exec (#57)

* [CONSUL-436] Curl timeout and run tests (#58)

* [CONSUL-443] Create dogstatsd Function (#59)

* [CONSUL-431] Update Docs Netcat (#60)

* [CONSUL-439] Parse nc Command in function (#61)

* [CONSUL-463] Review curl Exec and get_ca_root Func (#63)

* [CONSUL-453] Docker hostname in Helper functions (#64)

* [CONSUL-461] Test wipe volumes without extra cont (#66)

* [CONSUL-454] Check ports in the Server and Agent containers (#65)

* [CONSUL-441] Update windows dockerfile with version (#62)

* [CONSUL-466] Review case-grpc Failing Test (#67)

* [CONSUL-494] Review case-cfg-resolver-svc-failover (#68)

* [CONSUL-496] Replace docker_wget & docker_curl (#69)

* [CONSUL-499] Cleanup Scripts - Remove nanoserver (#70)

* [CONSUL-500] Update Troubleshooting Docs (#72)

* [CONSUL-502] Pull & Tag Envoy Windows Image (#73)

* [CONSUL-504] Replace docker run in docker_consul (#76)

* [CONSUL-505] Change admin_bind

* [CONSUL-399] Update envoy to 1.23.1 (#78)

* [CONSUL-510] Support case-wanfed-gw on Windows (#79)

* [CONSUL-506] Update troubleshooting Documentation (#80)

* [CONSUL-512] Review debug_dump_volumes Function (#81)

* [CONSUL-514] Add zipkin to Docker Image (#82)

* [CONSUL-515] Update Documentation (#83)

* [CONSUL-529] Support case-consul-exec (#86)

* [CONSUL-530] Update Documentation (#87)

* [CONSUL-530] Update default consul version 1.13.3

* [CONSUL-539] Cleanup (#91)

* [CONSUL-546] Scripts Clean-up (#92)

* [CONSUL-491] Support admin_access_log_path value for Windows (#71)

* [CONSUL-519] Implement mkfifo Alternative (#84)

* [CONSUL-542] Create OS Specific Files for Envoy Package (#88)

* [CONSUL-543] Create exec_supported.go (#89)

* [CONSUL-544] Test and Build Changes (#90)

* Implement os.DevNull

* using mmap instead of disk files

* fix import in exec-unix

* fix mmap open too many arguments

* go fmt on file

* changelog file

* fix go mod

* Update .changelog/17694.txt

Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>

* different mmap library

* fix bootstrap json

* some fixes

* chocolatey version fix and image fix

* using different library

* fix Map function call

* fix mmap call

* fix tcp dump

* fix tcp dump

* windows tcp dump

* Fix docker run

* fix tests

* fix go mod

* fix version 16.0

* fix version

* fix version dev

* sleep to debug

* fix sleep

* fix permission issue

* fix permission issue

* fix permission issue

* fix command

* fix command

* fix function

* fix assert config entry status command not found

* fix command not found assert_cert_has_cn

* fix command not found assert_upstream_missing

* fix command not found assert_upstream_missing_once

* fix command not found get_upstream_endpoint

* fix command not found get_envoy_public_listener_once

* fix command not found

* fix test cases

* windows integration test workflow github

* made code similar to unix using npipe

* fix go.mod

* fix dialing of npipe

* dont wait

* check size of written json

* fix undefined n

* running

* fix dep

* fix syntax error

* fix workflow file

* windows runner

* fix runner

* fix from json

* fix runs on

* merge connect envoy

* fix cin path

* build

* fix file name

* fix file name

* fix dev build

* remove unwanted code

* fix upload

* fix bin name

* fix path

* checkout current branch

* fix path

* fix tests

* fix shell bash for windows sh files

* fix permission of run-test.sh

* removed docker dev

* added shell bash for tests

* fix tag

* fix win=true

* fix cd

* added dev

* fix variable undefined

* removed failing tests

* fix tcp dump image

* fix curl

* fix curl

* tcp dump path

* fix tcpdump path

* fix curl

* fix curl install

* stop removing intermediate containers

* fix tcpdump docker image

* revert -rm

* --rm=false

* making docker image before

* fix tcpdump

* removed case consul exec

* removed terminating gateway simple

* comment case wasm

* removed data dog

* comment out upload coverage

* uncomment case-consul-exec

* comment case consul exec

* if always

* logs

* using consul 1.17.0

* fix quotes

* revert quotes

* redirect to dev null

* Revert version

* revert consul connect

* fix version

* removed envoy connect

* not using function

* change log

* docker logs

* fix logs

* restructure bad authz

* removed dev null

* output

* fix file descriptor

* fix cacert

* fix cacert

* fix ca cert

* cacert does not work in windows curl

* fix func

* removed docker logs

* added sleep

* fix tls

* commented case-consul-exec

* removed echo

* retry docker consul

* fix upload bin

* uncomment consul exec

* copying consul.exe to docker image

* copy fix

* fix paths

* fix path

* github workspace path

* latest version

* Revert "latest version"

This reverts commit 5a7d7b82d9.

* commented consul exec

* added ssl revoke best effort

* revert best effort

* removed unused files

* rename var name and change dir

* windows runner

* permission

* needs setup fix

* switch to github runner

* fix file path

* fix path

* fix path

* fix path

* fix path

* fix path

* fix build paths

* fix tag

* nightly runs

* added matrix in github workflow, renamed files

* fix job

* fix matrix

* removed brackets

* from json

* without using job matrix

* fix quotes

* revert job matrix

* fix workflow

* fix comment

* added comment

* nightly runs

* removed datadog ci as it is already measured in linux one

* running test

* Revert "running test"

This reverts commit 7013d15a23.

* pr comment fixes

* running test now

* running subset of test

* running subset of test

* job matrix

* shell bash

* removed bash shell

* linux machine for job matrix

* fix output

* added cat to debug

* using ubuntu latest

* fix job matrix

* fix win true

* fix go test

* revert job matrix

---------

Co-authored-by: Jose Ignacio Lorenzo <74208929+joselo85@users.noreply.github.com>
Co-authored-by: Franco Bruno Lavayen <cocolavayen@gmail.com>
Co-authored-by: Ivan K Berlot <ivanberlot@gmail.com>
Co-authored-by: Ezequiel Fernández Ponce <20102608+ezfepo@users.noreply.github.com>
Co-authored-by: joselo85 <joseignaciolorenzo85@gmail.com>
Co-authored-by: Ezequiel Fernández Ponce <ezequiel.fernandez@southworks.com>
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
pull/18215/head^2
Committed by Ashesh Vidyut via GitHub
commit 47d445d680
33 changed files (changed line counts in parentheses):

1. .changelog/18007.txt (3)
2. .github/scripts/get_runner_classes_windows.sh (26)
3. .github/workflows/reusable-dev-build-windows.yml (47)
4. .github/workflows/test-integrations-windows.yml (1210)
5. .release/docker/docker-entrypoint-windows.sh (82)
6. Dockerfile-windows (51)
7. build-support/windows/Dockerfile-consul-dev-windows (4)
8. build-support/windows/Dockerfile-consul-local-windows (52)
9. build-support/windows/Dockerfile-openzipkin-windows (12)
10. build-support/windows/build-consul-dev-image.sh (14)
11. build-support/windows/build-consul-local-images.sh (92)
12. build-support/windows/build-test-sds-server-image.sh (5)
13. build-support/windows/windows-test.md (119)
14. test/integration/connect/envoy/Dockerfile-consul-envoy-windows (12)
15. test/integration/connect/envoy/Dockerfile-tcpdump-windows (7)
16. test/integration/connect/envoy/Dockerfile-test-sds-server-windows (8)
17. test/integration/connect/envoy/WINDOWS-TEST.md (40)
18. test/integration/connect/envoy/case-dogstatsd-udp/verify.bats (7)
19. test/integration/connect/envoy/case-gateways-local/secondary/setup.sh (2)
20. test/integration/connect/envoy/case-grpc/service_s1.hcl (2)
21. test/integration/connect/envoy/case-grpc/verify.bats (2)
22. test/integration/connect/envoy/case-http-badauthz/setup.sh (4)
23. test/integration/connect/envoy/case-wanfed-gw/global-setup-windows.sh (44)
24. test/integration/connect/envoy/case-zipkin/verify.bats (7)
25. test/integration/connect/envoy/docker-windows.md (42)
26. test/integration/connect/envoy/docs/img/linux-arch.png (BIN)
27. test/integration/connect/envoy/docs/img/windows-arch-singlecontainer.png (BIN)
28. test/integration/connect/envoy/docs/img/windows-linux-arch.png (BIN)
29. test/integration/connect/envoy/docs/windows-testing-architecture.md (106)
30. test/integration/connect/envoy/helpers.windows.bash (1192)
31. test/integration/connect/envoy/main_test.go (100)
32. test/integration/connect/envoy/run-tests.windows.sh (908)
33. test/integration/connect/envoy/windows-troubleshooting.md (90)

3
.changelog/18007.txt

@@ -0,0 +1,3 @@
```release-note:improvement
Windows: Integration tests for Consul Windows VMs
```

26
.github/scripts/get_runner_classes_windows.sh

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0
#
# This script generates tag-sets that can be used as runs-on: values to select runners.
set -euo pipefail
case "$GITHUB_REPOSITORY" in
*-enterprise)
# shellcheck disable=SC2129
echo "compute-small=['self-hosted', 'windows', 'small']" >> "$GITHUB_OUTPUT"
echo "compute-medium=['self-hosted', 'windows', 'medium']" >> "$GITHUB_OUTPUT"
echo "compute-large=['self-hosted', 'windows', 'large']" >> "$GITHUB_OUTPUT"
# m5d.8xlarge is equivalent to our xl custom runner in OSS
echo "compute-xl=['self-hosted', 'ondemand', 'windows', 'type=m5d.8xlarge']" >> "$GITHUB_OUTPUT"
;;
*)
# shellcheck disable=SC2129
echo "compute-small=['windows-2019']" >> "$GITHUB_OUTPUT"
echo "compute-medium=['windows-2019']" >> "$GITHUB_OUTPUT"
echo "compute-large=['windows-2019']" >> "$GITHUB_OUTPUT"
echo "compute-xl=['windows-2019']" >> "$GITHUB_OUTPUT"
;;
esac
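
For reference, a local dry-run of this script looks like the following (the exported variables are normally provided by GitHub Actions; the values shown are illustrative):
```shell
# Simulate the Actions environment and inspect the generated runner tag-sets.
export GITHUB_REPOSITORY="hashicorp/consul"
export GITHUB_OUTPUT="$(mktemp)"
bash .github/scripts/get_runner_classes_windows.sh
cat "$GITHUB_OUTPUT"
# compute-small=['windows-2019']
# compute-medium=['windows-2019']
# compute-large=['windows-2019']
# compute-xl=['windows-2019']
```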

47
.github/workflows/reusable-dev-build-windows.yml

@@ -0,0 +1,47 @@
name: reusable-dev-build-windows
on:
workflow_call:
inputs:
uploaded-binary-name:
required: false
type: string
default: "consul.exe"
runs-on:
description: An expression indicating which kind of runners to use.
required: true
type: string
repository-name:
required: true
type: string
go-arch:
required: false
type: string
default: ""
secrets:
elevated-github-token:
required: true
jobs:
build:
runs-on: 'windows-2019'
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # v3.5.2
# NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos.
- name: Setup Git
if: ${{ endsWith(inputs.repository-name, '-enterprise') }}
run: git config --global url."https://${{ secrets.elevated-github-token }}:@github.com".insteadOf "https://github.com"
- uses: actions/setup-go@fac708d6674e30b6ba41289acaab6d4b75aa0753 # v4.0.1
with:
go-version-file: 'go.mod'
- name: Build
env:
GOARCH: ${{ inputs.go-arch }}
run: go build .
# save dev build to pass to downstream jobs
- uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v3.1.2
with:
name: ${{inputs.uploaded-binary-name}}
path: consul.exe
- name: Notify Slack
if: ${{ failure() }}
run: .github/scripts/notify_slack.sh
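
Outside of CI, the rough local equivalent of this workflow's Build step is a plain build of the repository root; a minimal sketch (cross-compiling from a non-Windows host, flags illustrative):
```shell
# Produce a consul.exe dev binary comparable to the workflow's uploaded artifact.
GOOS=windows GOARCH=amd64 go build -o consul.exe .
```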

1210
.github/workflows/test-integrations-windows.yml

File diff suppressed because it is too large.

82
.release/docker/docker-entrypoint-windows.sh

@@ -0,0 +1,82 @@
#!/usr/bin/dumb-init /bin/sh
set -e
# Note above that we run dumb-init as PID 1 in order to reap zombie processes
# as well as forward signals to all processes in its session. Normally, sh
# wouldn't do either of these functions so we'd leak zombies as well as do
# unclean termination of all our sub-processes.
# As of docker 1.13, using docker run --init achieves the same outcome.
# You can set CONSUL_BIND_INTERFACE to the name of the interface you'd like to
# bind to and this will look up the IP and pass the proper -bind= option along
# to Consul.
CONSUL_BIND=
if [ -n "$CONSUL_BIND_INTERFACE" ]; then
CONSUL_BIND_ADDRESS=$(ip -o -4 addr list $CONSUL_BIND_INTERFACE | head -n1 | awk '{print $4}' | cut -d/ -f1)
if [ -z "$CONSUL_BIND_ADDRESS" ]; then
echo "Could not find IP for interface '$CONSUL_BIND_INTERFACE', exiting"
exit 1
fi
CONSUL_BIND="-bind=$CONSUL_BIND_ADDRESS"
echo "==> Found address '$CONSUL_BIND_ADDRESS' for interface '$CONSUL_BIND_INTERFACE', setting bind option..."
fi
# You can set CONSUL_CLIENT_INTERFACE to the name of the interface you'd like to
# bind client interfaces (HTTP, DNS, and RPC) to and this will look up the IP and
# pass the proper -client= option along to Consul.
CONSUL_CLIENT=
if [ -n "$CONSUL_CLIENT_INTERFACE" ]; then
CONSUL_CLIENT_ADDRESS=$(ip -o -4 addr list $CONSUL_CLIENT_INTERFACE | head -n1 | awk '{print $4}' | cut -d/ -f1)
if [ -z "$CONSUL_CLIENT_ADDRESS" ]; then
echo "Could not find IP for interface '$CONSUL_CLIENT_INTERFACE', exiting"
exit 1
fi
CONSUL_CLIENT="-client=$CONSUL_CLIENT_ADDRESS"
echo "==> Found address '$CONSUL_CLIENT_ADDRESS' for interface '$CONSUL_CLIENT_INTERFACE', setting client option..."
fi
# CONSUL_DATA_DIR is exposed as a volume for possible persistent storage. The
# CONSUL_CONFIG_DIR isn't exposed as a volume but you can compose additional
# config files in there if you use this image as a base, or use CONSUL_LOCAL_CONFIG
# below.
CONSUL_DATA_DIR=C:\\consul\\data
CONSUL_CONFIG_DIR=C:\\consul\\config
# You can also set the CONSUL_LOCAL_CONFIG environment variable to pass some
# Consul configuration JSON without having to bind any volumes.
if [ -n "$CONSUL_LOCAL_CONFIG" ]; then
echo "$CONSUL_LOCAL_CONFIG" > "$CONSUL_CONFIG_DIR/local.json"
fi
# If the user is trying to run Consul directly with some arguments, then
# pass them to Consul.
if [ "${1:0:1}" = '-' ]; then
set -- consul "$@"
fi
# Look for Consul subcommands.
if [ "$1" = 'agent' ]; then
shift
set -- consul agent \
-data-dir="$CONSUL_DATA_DIR" \
-config-dir="$CONSUL_CONFIG_DIR" \
$CONSUL_BIND \
$CONSUL_CLIENT \
"$@"
elif [ "$1" = 'version' ]; then
# This needs a special case because there's no help output.
set -- consul "$@"
elif consul --help "$1" 2>&1 | grep -q "consul $1"; then
# We can't use the return code to check for the existence of a subcommand, so
# we have to use grep to look for a pattern in the help output.
set -- consul "$@"
fi
# NOTE: Unlike in the regular Consul Docker image, we don't have code here
# for changing data-dir directory ownership or using su-exec because OpenShift
# won't run this container as root and so we can't change data dir ownership,
# and there's no need to use su-exec.
exec "$@"
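
As a usage sketch for this entrypoint (the interface name and JSON payload are illustrative; `windows/consul` is the image built from Dockerfile-windows):
```shell
# The entrypoint resolves the interface IP and injects -bind, and writes the local config JSON.
docker run --rm \
  -e CONSUL_BIND_INTERFACE="Ethernet" \
  -e CONSUL_LOCAL_CONFIG='{"log_level":"DEBUG"}' \
  windows/consul agent -dev -client 0.0.0.0
```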

51
Dockerfile-windows

@@ -0,0 +1,51 @@
FROM mcr.microsoft.com/windows/servercore:ltsc2019
ARG VERSION=1.16.0
ENV chocolateyVersion=1.4.0
LABEL org.opencontainers.image.authors="Consul Team <consul@hashicorp.com>" \
org.opencontainers.image.url="https://www.consul.io/" \
org.opencontainers.image.documentation="https://www.consul.io/docs" \
org.opencontainers.image.source="https://github.com/hashicorp/consul" \
org.opencontainers.image.version=$VERSION \
org.opencontainers.image.vendor="HashiCorp" \
org.opencontainers.image.title="consul" \
org.opencontainers.image.description="Consul is a datacenter runtime that provides service discovery, configuration, and orchestration." \
version=${VERSION}
RUN ["powershell", "Set-ExecutionPolicy", "Bypass", "-Scope", "Process", "-Force;"]
RUN ["powershell", "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
RUN choco install git.install -yf
RUN SETX /M path "%PATH%;C:\Program Files\Git\bin"
RUN mkdir C:\\consul
RUN mkdir C:\\consul\\data
RUN mkdir C:\\consul\\config
# Server RPC is used for communication between Consul clients and servers for internal
# request forwarding.
EXPOSE 8300
# Serf LAN and WAN (WAN is used only by Consul servers) are used for gossip between
# Consul agents. LAN is within the datacenter and WAN is between just the Consul
# servers in all datacenters.
EXPOSE 8301 8301/udp 8302 8302/udp
# HTTP and DNS (both TCP and UDP) are the primary interfaces that applications
# use to interact with Consul.
EXPOSE 8500 8600 8600/udp
#ENV CONSUL_URL=https://releases.hashicorp.com/consul/${VERSION}/consul_${VERSION}_windows_amd64.zip
#RUN curl %CONSUL_URL% -L -o consul.zip
#RUN tar -xf consul.zip -C consul
COPY consul.exe C:\\consul
COPY .release/docker/docker-entrypoint-windows.sh C:\\docker-entrypoint-windows.sh
ENTRYPOINT ["bash.exe", "docker-entrypoint-windows.sh"]
# By default you'll get an insecure single-node development server that stores
# everything in RAM, exposes a web UI and HTTP endpoints, and bootstraps itself.
# Don't use this configuration for production.
CMD ["agent", "-dev", "-client", "0.0.0.0"]

4
build-support/windows/Dockerfile-consul-dev-windows

@@ -0,0 +1,4 @@
ARG VERSION=1.16.0
FROM windows/consul:${VERSION}-local
COPY dist/ C:\\consul

52
build-support/windows/Dockerfile-consul-local-windows

@@ -0,0 +1,52 @@
ARG VERSION=1.13.3
FROM windows/test-sds-server as test-sds-server
FROM docker.mirror.hashicorp.services/windows/openzipkin as openzipkin
FROM windows/consul:${VERSION}
# Fortio binary downloaded
RUN mkdir fortio
ENV FORTIO_URL=https://github.com/fortio/fortio/releases/download/v1.33.0/fortio_win_1.33.0.zip
RUN curl %FORTIO_URL% -L -o fortio.zip
RUN tar -xf fortio.zip -C fortio
RUN choco install openssl -yf
RUN choco install jq -yf
RUN choco install netcat -yf
RUN choco install openjdk -yf
# Install Bats
ENV BATS_URL=https://github.com/bats-core/bats-core/archive/refs/tags/v1.7.0.zip
RUN curl %BATS_URL% -L -o bats.zip
RUN mkdir bats-core
RUN tar -xf bats.zip -C bats-core --strip-components=1
RUN cd "C:\\Program Files\\Git\\bin" && bash.exe -c "/c/bats-core/install.sh /c/bats"
# Install Jaeger
ENV JAEGER_URL=https://github.com/jaegertracing/jaeger/releases/download/v1.11.0/jaeger-1.11.0-windows-amd64.tar.gz
RUN curl %JAEGER_URL% -L -o jaeger.tar.gz
RUN mkdir jaeger
RUN tar -xf jaeger.tar.gz -C jaeger --strip-components=1
# Install Socat
ENV SOCAT_URL=https://github.com/tech128/socat-1.7.3.0-windows/archive/refs/heads/master.zip
RUN curl %SOCAT_URL% -L -o socat.zip
RUN mkdir socat
RUN tar -xf socat.zip -C socat --strip-components=1
# Copy test-sds-server binary and certs
COPY --from=test-sds-server ["C:/go/src/", "C:/test-sds-server/"]
# Copy openzipkin .jar file
COPY --from=openzipkin ["C:/zipkin", "C:/zipkin"]
EXPOSE 8300
EXPOSE 8301 8301/udp 8302 8302/udp
EXPOSE 8500 8600 8600/udp
EXPOSE 8502
EXPOSE 19000 19001 19002 19003 19004
EXPOSE 21000 21001 21002 21003 21004
EXPOSE 5000 1234 2345
RUN SETX /M path "%PATH%;C:\consul;C:\fortio;C:\jaeger;C:\Program Files\Git\bin;C:\Program Files\Git\usr\bin;C:\Program Files\OpenSSL-Win64\bin;C:\bats\bin\;C:\ProgramData\chocolatey\lib\jq\tools;C:\socat"
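
A quick sanity check that the tooling baked into this image ended up on PATH (a sketch; the tag assumes the image was built with VERSION=1.16.0):
```shell
# Spot-check a few of the tools the integration tests rely on inside the -local image.
docker run --rm windows/consul:1.16.0-local bash -c "consul version && jq --version && openssl version && bats --version && fortio version"
```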

12
build-support/windows/Dockerfile-openzipkin-windows

@@ -0,0 +1,12 @@
FROM docker.mirror.hashicorp.services/windows/openjdk:1809
RUN mkdir zipkin
RUN curl.exe -sSL 'https://search.maven.org/remote_content?g=io.zipkin&a=zipkin-server&v=LATEST&c=exec' -o zipkin/zipkin.jar
EXPOSE 9410/tcp
EXPOSE 9411/tcp
WORKDIR /zipkin
ENTRYPOINT ["java", "-jar", "zipkin.jar"]

14
build-support/windows/build-consul-dev-image.sh

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
cd ../../
rm -rf dist
export GOOS=windows GOARCH=amd64
VERSION=1.16.0
CONSUL_BUILDDATE=$(date +"%Y-%m-%dT%H:%M:%SZ")
GIT_IMPORT=github.com/hashicorp/consul/version
GOLDFLAGS=" -X $GIT_IMPORT.Version=$VERSION -X $GIT_IMPORT.VersionPrerelease=dev -X $GIT_IMPORT.BuildDate=$CONSUL_BUILDDATE "
go build -ldflags "$GOLDFLAGS" -o ./dist/ .
docker build -t windows/consul:${VERSION}-dev -f build-support/windows/Dockerfile-consul-dev-windows . --build-arg VERSION=${VERSION}

92
build-support/windows/build-consul-local-images.sh

@@ -0,0 +1,92 @@
#!/usr/bin/env bash
readonly HASHICORP_DOCKER_PROXY="docker.mirror.hashicorp.services"
# Build Consul Version 1.13.3 / 1.12.6 / 1.11.11
VERSION=${VERSION:-"1.16.0"}
export VERSION
# Build Windows Envoy Version 1.23.1 / 1.21.5 / 1.20.7
ENVOY_VERSION=${ENVOY_VERSION:-"1.23.1"}
export ENVOY_VERSION
echo "Building Images"
# Pull Windows Servercore image
echo " "
echo "Pull Windows Servercore image"
docker pull mcr.microsoft.com/windows/servercore:1809
# Tag Windows Servercore image
echo " "
echo "Tag Windows Servercore image"
docker tag mcr.microsoft.com/windows/servercore:1809 "${HASHICORP_DOCKER_PROXY}/windows/servercore:1809"
# Pull Windows Nanoserver image
echo " "
echo "Pull Windows Nanoserver image"
docker pull mcr.microsoft.com/windows/nanoserver:1809
# Tag Windows Nanoserver image
echo " "
echo "Tag Windows Nanoserver image"
docker tag mcr.microsoft.com/windows/nanoserver:1809 "${HASHICORP_DOCKER_PROXY}/windows/nanoserver:1809"
# Pull Windows OpenJDK image
echo " "
echo "Pull Windows OpenJDK image"
docker pull openjdk:windowsservercore-1809
# Tag Windows OpenJDK image
echo " "
echo "Tag Windows OpenJDK image"
docker tag openjdk:windowsservercore-1809 "${HASHICORP_DOCKER_PROXY}/windows/openjdk:1809"
# Pull Windows Golang image
echo " "
echo "Pull Windows Golang image"
docker pull golang:1.18.1-nanoserver-1809
# Tag Windows Golang image
echo " "
echo "Tag Windows Golang image"
docker tag golang:1.18.1-nanoserver-1809 "${HASHICORP_DOCKER_PROXY}/windows/golang:1809"
# Pull Kubernetes/pause image
echo " "
echo "Pull Kubernetes/pause image"
docker pull mcr.microsoft.com/oss/kubernetes/pause:3.6
# Tag Kubernetes/pause image
echo " "
echo "Tag Kubernetes/pause image"
docker tag mcr.microsoft.com/oss/kubernetes/pause:3.6 "${HASHICORP_DOCKER_PROXY}/windows/kubernetes/pause"
# Pull envoy-windows image
echo " "
echo "Pull envoyproxy/envoy-windows image"
docker pull envoyproxy/envoy-windows:v${ENVOY_VERSION}
# Tag envoy-windows image
echo " "
echo "Tag envoyproxy/envoy-windows image"
docker tag envoyproxy/envoy-windows:v${ENVOY_VERSION} "${HASHICORP_DOCKER_PROXY}/windows/envoy-windows:v${ENVOY_VERSION}"
# Build Windows Openzipkin Image
docker build -t "${HASHICORP_DOCKER_PROXY}/windows/openzipkin" -f Dockerfile-openzipkin-windows .
# Build Windows Test sds server Image
./build-test-sds-server-image.sh
# Build windows/consul:${VERSION} Image
echo " "
echo "Build windows/consul:${VERSION} Image"
docker build -t "windows/consul:${VERSION}" -f ../../Dockerfile-windows ../../ --build-arg VERSION=${VERSION}
# Build windows/consul:${VERSION}-local Image
echo " "
echo "Build windows/consul:${VERSION}-local Image"
docker build -t windows/consul:${VERSION}-local -f ./Dockerfile-consul-local-windows . --build-arg VERSION=${VERSION}
echo "Building Complete!"

5
build-support/windows/build-test-sds-server-image.sh

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
cd ../../test/integration/connect/envoy
docker build -t windows/test-sds-server -f ./Dockerfile-test-sds-server-windows test-sds-server

119
build-support/windows/windows-test.md

@@ -0,0 +1,119 @@
# Dockerfiles for Windows Integration Tests
## Index
- [About](#about-this-file)
- [Consul Windows](#consul-windows)
- [Consul Windows Local](#consul-windows-local)
- [Consul Windows Dev](#consul-windows-dev)
- [Dockerfile-openzipkin-windows](#dockerfile-openzipkin-windows)
## About this File
In this file you will find which Docker images need to be pre-built to run the Envoy integration tests on Windows, as well as information on how to run each of them individually for testing purposes.
## Consul Windows
The Windows/Consul:_{VERSION}_ image is built from the "Dockerfile-windows" file located at the root of the project.
To do this, the official [windows/servercore image](https://hub.docker.com/_/microsoft-windows-servercore) is used as the base image.
To build the image, use the following command:
```shell
docker build -t windows/consul -f Dockerfile-windows . --build-arg VERSION=${VERSION}
```
You can test the built file by running the following command:
```shell
docker run --rm -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8500:8500 -p 8600:8600 --name consul --hostname "consul-primary-server" --network-alias "consul-primary-server" windows/consul agent -dev -datacenter "primary" -grpc-port -1 -client "0.0.0.0" -bind "0.0.0.0"
```
If everything works properly, you should be able to open the browser and see the Consul UI running at `http://localhost:8500`.
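If you prefer a non-interactive check, querying the HTTP API from the host also works (a minimal sketch using a standard Consul endpoint):
```shell
curl -s http://localhost:8500/v1/status/leader
# Expected output: the advertised leader address, for example "127.0.0.1:8300"
```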
## Consul Windows Local
The Windows/Consul:_{VERSION}_-local custom image defined in the "Dockerfile-consul-local-windows" Dockerfile is built by the shell script _build-consul-local-images.sh_.
When the script is executed, all the tools required to run the Windows Connect Envoy integration tests are added to the image.
The _"windows/consul"_ image must have been built first; this script also takes care of that.
To build this image you need to run the following command on your terminal:
```shell
./build-consul-local-images.sh
```
> [!NOTE]
> Shell script execution may vary depending on your terminal, we recommend using **Git Bash** for Windows.
You can test the built image by running the following command:
```shell
docker run --rm -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8500:8500 -p 8600:8600 --name consul-local --hostname "consul-primary-server" --network-alias "consul-primary-server" windows/consul:_{VERSION}_-local agent -dev -datacenter "primary" -grpc-port -1 -client "0.0.0.0" -bind "0.0.0.0"
```
If everything works properly you can use your browser and check the Consul UI running on: `http://localhost:8500`
## Consul Windows Dev
The Windows/Consul:_{VERSION}_-dev custom image defined in the "Dockerfile-consul-dev-windows" Dockerfile is generated by the shell script _build-consul-dev-image.sh_.
When the script is executed, Consul is compiled and saved in the _"dist"_ directory; the resulting binary is then copied into the _"windows/consul:_{VERSION}_-dev"_ image.
The _"windows/consul_{VERSION}_-local"_ image must have been built first.
To build this image you need to run the following command on your terminal:
```shell
./build-consul-dev-image.sh
```
> [!NOTE]
> Shell script execution may vary depending on your terminal, we recommend using **Git Bash** for Windows.
You can test the built file by running the following command:
```shell
docker run --rm -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8500:8500 -p 8600:8600 --name consul-local --hostname "consul-primary-server" --network-alias "consul-primary-server" windows/consul:_{VERSION}_-dev agent -dev -datacenter "primary" -grpc-port -1 -client "0.0.0.0" -bind "0.0.0.0"
```
If everything works properly you can use your browser and check the Consul UI running on: `http://localhost:8500`
## Dockerfile-openzipkin-windows
Due to the unavailability of an official Openzipkin Docker image for Windows, the [openjdk Windows image](https://hub.docker.com/layers/openjdk/library/openjdk/jdk-windowsservercore-1809/images/sha256-b0cc238d2ec5fb58109a0006ff9e1bcaf66a5301f49bcb8dece9599ac5be6331) is used as the base, into which the latest self-contained executable Openzipkin .jar file is downloaded.
To build this image you need to run the following command on your terminal:
```shell
docker build -t openzipkin -f Dockerfile-openzipkin-windows .
```
You can test the built file by running the following command:
```shell
docker run --rm --name openzipkin
```
If everything works as it should, you will see the zipkin logo being displayed, along with the current version and port configuration:
```shell
:: version 2.23.18 :: commit 4b71677 ::
20XX-XX-XX XX:XX:XX.XXX INFO [/] 1252 --- [oss-http-*:9411] c.l.a.s.Server : Serving HTTP at /[0:0:0:0:0:0:0:0]:9411 - http://127.0.0.1:9411/
```
# Testing
During development, it may be more convenient to check your work-in-progress by running only the tests which you expect to be affected by your changes, as the full test suite can take several minutes to execute. [Go's built-in test tool](https://golang.org/pkg/cmd/go/internal/test/) allows specifying a list of packages to test and the `-run` option to only include test names matching a regular expression.
The `go test -short` flag can also be used to skip slower tests.
Examples (run from the repository root):
- `go test -v ./connect` will run all tests in the connect package (see `./connect` folder)
- `go test -v -run TestRetryJoin ./command/agent` will run all tests in the agent package (see `./command/agent` folder) with name substring `TestRetryJoin`
When a pull request is opened CI will run all tests and lint to verify the change.
If you want to run the tests against the Windows images, you must pass the `-win=true` flag.
Example:
```shell
go test -v -timeout=30m -tags integration ./test/integration/connect/envoy -run="TestEnvoy/case-badauthz" -win=true
```

12
test/integration/connect/envoy/Dockerfile-consul-envoy-windows

@@ -0,0 +1,12 @@
# From Consul Version 1.13.3 / 1.12.6 / 1.11.11
ARG VERSION=1.16.0-dev
# From Envoy version 1.23.1 / 1.21.5 / 1.20.7
ARG ENVOY_VERSION
FROM docker.mirror.hashicorp.services/windows/envoy-windows:v${ENVOY_VERSION} as envoy
FROM windows/consul:${VERSION}
# Copy envoy.exe from windows/envoy-windows:v${ENVOY_VERSION}
COPY --from=envoy ["C:/Program Files/envoy/", "C:/envoy/"]
RUN SETX /M path "%PATH%;C:\envoy;"

7
test/integration/connect/envoy/Dockerfile-tcpdump-windows

@@ -0,0 +1,7 @@
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY ["tcpdump.exe", "C:/Program Files/"]
ENTRYPOINT ["C:/Program Files/tcpdump.exe"]
# docker.exe build -t envoy-tcpdump -f Dockerfile-tcpdump-windows .

8
test/integration/connect/envoy/Dockerfile-test-sds-server-windows

@@ -0,0 +1,8 @@
FROM docker.mirror.hashicorp.services/windows/golang:1809
WORKDIR /go/src
COPY ./ .
RUN go build -v -o test-sds-server.exe sds.go
CMD ["test-sds-server.exe"]

40
test/integration/connect/envoy/WINDOWS-TEST.md

@@ -0,0 +1,40 @@
# Envoy Integration Tests on Windows
## Index
- [About](#about)
- [Pre-built core images](#pre-built-core-images)
- [Test images](#integration-test-images)
- [Run Tests](#run-tests)
## About
This file is the entrypoint to understand how to execute Envoy integration tests on Windows as well as to understand the differences between Linux tests and Windows tests. Below you can find a list of relevant documentation that has been written while working on supporting the Envoy integration tests on Windows.
- [Windows Testing Architecture](test/integration/connect/envoy/docs/windows-testing-architecture.md): In this file you will find why the testing architecture on Windows differs from Linux's.
- [Build Images](build-support-windows/BUILD-IMAGES.md): Here you will find how to build the images required for executing the tests.
- [Windows Troubleshooting](test/integration/connect/envoy/WindowsTroubleshooting.md): This file lists, among other things, everything we needed to change or adapt for the existing tests to run in Windows containers.
## Pre-built core images
Before running the integration tests, you must pre-build the core images that the tests require to run in the Windows environment. Make sure to check out the `BUILD-IMAGES` file [here](build-support-windows/BUILD-IMAGES.md) for this purpose.
## Integration test images
During the execution of the integration tests, several images are built based on the pre-built core images. For more information about these images and how to run them independently, please check out the `docker.windows` file [here](test/integration/connect/envoy/docker.windows.md).
## Run tests
To run all the integration tests, execute the following command:
```shell
go test -v -timeout=30m -tags integration ./test/integration/connect/envoy -run="TestEnvoy" -win=true
```
To run a single test case, its name should be specified. For instance, to run the `case-badauthz` test, execute the following command:
```shell
go test -v -timeout=30m -tags integration ./test/integration/connect/envoy -run="TestEnvoy/case-badauthz" -win=true
```
> :warning: Note that the flag `-win=true` must be specified as shown in the above commands. This flag is important because it indicates that the tests will be executed in the Windows environment. When the Envoy integration tests are executed, the **End of Line Sequence** of every related file and/or script is automatically changed from **CRLF to LF**.

7
test/integration/connect/envoy/case-dogstatsd-udp/verify.bats

@@ -24,14 +24,11 @@ load helpers
}
@test "s1 proxy should be sending metrics to statsd" {
run retry_default cat /workdir/primary/statsd/statsd.log
run retry_default must_match_in_statsd_logs '^envoy\.' primary
echo "METRICS:"
echo "$output"
echo "COUNT: $(echo "$output" | grep -Ec '^envoy\.')"
echo "METRICS: $output"
[ "$status" == 0 ]
[ $(echo $output | grep -Ec '^envoy\.') -gt "0" ]
}
@test "s1 proxy should be sending dogstatsd tagged metrics" {

2
test/integration/connect/envoy/case-gateways-local/secondary/setup.sh

@@ -9,4 +9,4 @@ register_services secondary
gen_envoy_bootstrap s2 19001 secondary
gen_envoy_bootstrap mesh-gateway 19003 secondary true
retry_default docker_consul secondary curl -s "http://localhost:8500/v1/catalog/service/consul?dc=primary" >/dev/null
retry_default docker_consul secondary curl -s "http://localhost:8500/v1/catalog/service/consul?dc=primary" > /dev/null

2
test/integration/connect/envoy/case-grpc/service_s1.hcl

@@ -20,7 +20,7 @@ services {
protocol = "grpc"
envoy_dogstatsd_url = "udp://127.0.0.1:8125"
envoy_stats_tags = ["foo=bar"]
envoy_stats_flush_interval = "1s"
envoy_stats_flush_interval = "5s"
}
}
}

2
test/integration/connect/envoy/case-grpc/verify.bats

@@ -43,7 +43,7 @@ load helpers
metrics_query='envoy.cluster.grpc.PingServer.total.*[#,]local_cluster:s1(,|$)'
fi
run retry_default must_match_in_statsd_logs "${metrics_query}"
run retry_long must_match_in_statsd_logs "${metrics_query}"
echo "OUTPUT: $output"
[ "$status" == 0 ]

4
test/integration/connect/envoy/case-http-badauthz/setup.sh

@@ -5,10 +5,10 @@
set -eEuo pipefail
register_services primary
# Setup deny intention
setup_upsert_l4_intention s1 s2 deny
register_services primary
gen_envoy_bootstrap s1 19000 primary
gen_envoy_bootstrap s2 19001 primary

44
test/integration/connect/envoy/case-wanfed-gw/global-setup-windows.sh

@@ -0,0 +1,44 @@
#!/bin/bash
# initialize the outputs for each dc
for dc in primary secondary; do
rm -rf "workdir/${dc}/tls"
mkdir -p "workdir/${dc}/tls"
done
container="consul-envoy-integ-tls-init--${CASE_NAME}"
scriptlet="
mkdir /out ;
cd /out ;
consul tls ca create ;
consul tls cert create -dc=primary -server -node=pri ;
consul tls cert create -dc=secondary -server -node=sec ;
"
docker.exe rm -f "$container" &>/dev/null || true
docker.exe run -i --net=none --name="$container" windows/consul:local bash -c "${scriptlet}"
# primary
for f in \
consul-agent-ca.pem \
primary-server-consul-0-key.pem \
primary-server-consul-0.pem \
; do
docker.exe cp "${container}:C:\\Program Files\\Git\\out\\$f" workdir/primary/tls
done
# secondary
for f in \
consul-agent-ca.pem \
secondary-server-consul-0-key.pem \
secondary-server-consul-0.pem \
; do
docker.exe cp "${container}:C:\\Program Files\\Git\\out\\$f" workdir/secondary/tls
done
# Private keys have 600 perms but tests are run as another user
chmod 666 workdir/primary/tls/primary-server-consul-0-key.pem
chmod 666 workdir/secondary/tls/secondary-server-consul-0-key.pem
docker.exe rm -f "$container" >/dev/null || true
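
To manually confirm the copied certificates are usable, an illustrative check with openssl (file names match the loops above):
```shell
# Inspect the primary server certificate and verify it against the copied CA.
openssl x509 -in workdir/primary/tls/primary-server-consul-0.pem -noout -subject -issuer -dates
openssl verify -CAfile workdir/primary/tls/consul-agent-ca.pem workdir/primary/tls/primary-server-consul-0.pem
```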

7
test/integration/connect/envoy/case-zipkin/verify.bats

@@ -35,14 +35,17 @@ load helpers
# Send traced request through upstream. Debug echoes headers back which we can
# use to get the traceID generated (no way to force one I can find with Envoy
# currently?)
run curl -s -f -H 'x-client-trace-id:test-sentinel' localhost:5000/Debug
# Fixed from /Debug -> /debug. Reason: /Debug returns null
run curl -s -f -H 'x-client-trace-id:test-sentinel' localhost:5000/debug -m 5
echo "OUTPUT $output"
[ "$status" == "0" ]
# Get the traceID from the output
TRACEID=$(echo $output | grep 'X-B3-Traceid:' | cut -c 15-)
# Replaced grep with jq to filter the TraceId.
# Reason: grep did not filter and returned the entire raw string, so the test was failing
TRACEID=$(echo $output | jq -rR 'split("X-B3-Traceid: ") | last' | cut -c -16)
# Get the trace from Jaeger. Won't bother parsing it just seeing it show up
# there is enough to know that the tracing config worked.

42
test/integration/connect/envoy/docker-windows.md

@@ -0,0 +1,42 @@
# Docker Files for Windows Integration Tests
## Index
- [About](#about-this-file)
- [Pre-requisites](#pre-requisites)
- [Dockerfile-test-sds-server-windows](#dockerfile-test-sds-server-windows)
## About this File
In this file you will find which Dockerfiles are needed to run the Envoy integration tests on Windows, as well as information on how to run each of these files individually for testing purposes.
## Pre-requisites
Before building and running these images and containers, you need to have pre-built the base images used by these Dockerfiles. See [pre-built images required in Windows](../../../../build-support-windows/BUILD-IMAGES.md).
## Dockerfile-test-sds-server-windows
This file's sole purpose is to build the test-sds-server executable using Go. To do so, we use an official [golang image](https://hub.docker.com/_/golang/) from Docker Hub based on Windows Nano Server.
To build this image you need to run the following command on your terminal:
```shell
docker build -t test-sds-server -f Dockerfile-test-sds-server-windows test-sds-server
```
This is the same command used in run-tests.sh
You can test the built file by running the following command:
```shell
docker run --rm -p 1234:1234 --name test-sds-server test-sds-server
```
If everything works properly you should get the following output:
```shell
20XX-XX-XXTXX:XX:XX.XXX-XXX [INFO] Loaded cert from file: name=ca-root
20XX-XX-XXTXX:XX:XX.XXX-XXX [INFO] Loaded cert from file: name=foo.example.com
20XX-XX-XXTXX:XX:XX.XXX-XXX [INFO] Loaded cert from file: name=wildcard.ingress.consul
20XX-XX-XXTXX:XX:XX.XXX-XXX [INFO] Loaded cert from file: name=www.example.com
20XX-XX-XXTXX:XX:XX.XXX-XXX [INFO] ==> SDS listening: addr=0.0.0.0:1234
```

BIN
test/integration/connect/envoy/docs/img/linux-arch.png (new image, 62 KiB)

BIN
test/integration/connect/envoy/docs/img/windows-arch-singlecontainer.png (new image, 111 KiB)

BIN
test/integration/connect/envoy/docs/img/windows-linux-arch.png (new image, 60 KiB)

106
test/integration/connect/envoy/docs/windows-testing-architecture.md

@@ -0,0 +1,106 @@
# Windows Testing Architecture
## Index
- [About](#about)
- [Testing Architectures](#testing-architectures)
- [Linux Test Architecture](#linux-test-architecture)
- [Replicating the Linux Test Architecture on Windows](#replicating-the-linux-test-architecture-on-windows)
- [Single Container Test Architecture](#single-container-test-architecture)
- [Docker Image Components](#docker-image-components)
- Main Components:
- [Bats](#bats)
- [Fortio](#fortio)
- [Jaegertracing](#jaegertracing)
- [Openzipkin](#openzipkin)
- [Socat](#socat)
- Additional tools:
- [Git Bash](#git-bash)
- [JQ](#jq)
- [Netcat](#netcat)
- [Openssl](#openssl)
## About
The purpose of this document is not only to explain why the testing architecture is different on Windows but also to describe how the Single Container test architecture is composed.
## Testing Architectures
### Linux Test Architecture
On Linux, tests take advantage of the Host network feature (only available for Linux containers). This means that every container within the network shares the host’s networking namespace. The network stack for every container that uses this network mode won’t be isolated from the Docker host and won’t get their own IP address.
![linux-architecture](./img/linux-arch.png)
Every time a test is run, a directory called workdir is created and all the files required to run the tests are copied into it. This directory is then mounted as a **named volume**, and a container with a Kubernetes pause image, tagged as *envoy_workdir_1*, is run to keep the volume accessible as other containers start while the tests run. Unlike Windows containers, Linux containers allow file system operations at runtime.
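A rough sketch of that pattern, with the pause image name purely illustrative, looks like this:
```shell
# Named volume held open by a pause container so the other test containers can mount it;
# copying into the running container works here because these are Linux containers.
docker volume create envoy_workdir
docker run -d --name envoy_workdir_1 -v envoy_workdir:/workdir mcr.microsoft.com/oss/kubernetes/pause:3.6
docker cp workdir/. envoy_workdir_1:/workdir
```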
### Replicating the Linux Test Architecture on Windows
As previously mentioned, on Windows there is no host networking feature, so we went with a NAT network instead. The main consequence of this is that each container now has its own networking stack (its own IP address): containers can communicate among themselves using Docker's DNS feature (by container name), but no longer through localhost.
Another problem with sticking to this architecture is that the configuration files assume that every service (the services run by Fortio and Envoy's sidecar proxy service) is running on localhost. Although we had some partial success modifying those files at runtime, we kept finding issues related to this.
The tests' assertions are composed of either function calls or curl executions; we managed this by mapping those calls to the corresponding container name.
![windows-linux-architecture](./img/windows-linux-arch.png)
The figure above depicts the failing connections. We kept the same architecture as on Linux and tried to work around those connectivity issues.
Finally, after several tries, it was decided that instead of replicating the Linux architecture on Windows, it was more straightforward to have a single container with all the required components to run the tests. This **single container** test architecture is the approach that works best on Windows.
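For reference, the name-based connectivity that replaces localhost on a Windows NAT network looks roughly like this (a sketch; the network name matches run-tests.windows.sh, the container names are illustrative):
```shell
# Containers on a user-defined NAT network resolve each other by container name, not via localhost.
docker network create -d nat envoy-tests
docker run -d --name consul-primary-server --network envoy-tests windows/consul:local agent -dev -client 0.0.0.0
docker run --rm --network envoy-tests windows/consul:local bash -c "curl -s http://consul-primary-server:8500/v1/status/leader"
```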
## Single Container Test Architecture
As mentioned above, the single container approach, means building a Windows Docker image not only with Consul and Envoy, but also with all the tools required to execute the existing Envoy integration tests.
![windows-linux-singlecontainer](./img/windows-arch-singlecontainer.png)
Below you can find a list and a brief description of those components.
### Docker Image Components
The Docker image used for the Consul - Envoy integration tests has several components needed to run those tests.
- Main Components:
- [Bats](#bats)
- [Fortio](#fortio)
- [Jaegertracing](#jaegertracing)
- [Openzipkin](#openzipkin)
- [Socat](#socat)
- Additional tools:
- [Git Bash](#git-bash)
- [JQ](#jq)
- [Netcat](#netcat)
- [Openssl](#openssl)
#### Bats
BATS stands for Bash Automated Testing System and is the one in charge of executing the tests.
#### Fortio
Fortio is a microservices (http, grpc) load testing library, command line tool, advanced echo server, and web UI. It is used to run the services registered into Consul during the integration tests.
#### Jaegertracing
Jaeger is open source software for tracing transactions between distributed services. It's used for monitoring and troubleshooting complex microservices environments. It is used along with Openzipkin in some test cases.
#### Openzipkin
Zipkin is also a tracing software.
#### Socat
Socat is a command-line utility that establishes two bidirectional byte streams and transfers data between them. In these integration tests it is used to redirect Envoy's stats. There is no official Windows version, so we are using the unofficial release available [here](https://github.com/tech128/socat-1.7.3.0-windows).
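As an illustration of the kind of redirection it is used for (the exact invocation in the helpers may differ):
```shell
# Receive Envoy's UDP statsd/dogstatsd metrics and append them to a log the bats assertions can grep.
socat -u UDP-RECVFROM:8125,fork OPEN:/c/workdir/primary/statsd/statsd.log,creat,append
```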
#### Git Bash
This tool is only used in the Windows tests; it was added to the Docker image to be able to use some Linux commands during test execution.
#### JQ
Jq is a lightweight and flexible command-line JSON processor. It is used in several tests to modify and filter JSON outputs.
#### Netcat
Netcat is a simple program that reads and writes data across networks, much the same way that cat reads and writes data to files.
#### Openssl
OpenSSL is an all-around cryptography library that offers an open-source implementation of the TLS protocol. It is used to verify that the correct TLS certificates are being provisioned during tests.

1192
test/integration/connect/envoy/helpers.windows.bash

File diff suppressed because it is too large.

100
test/integration/connect/envoy/main_test.go

@@ -7,6 +7,9 @@
package envoy
import (
"flag"
"io/ioutil"
"log"
"os"
"os/exec"
"sort"
@@ -16,11 +19,23 @@ import (
"github.com/stretchr/testify/require"
)
var (
flagWin = flag.Bool("win", false, "Execute tests on windows")
)
func TestEnvoy(t *testing.T) {
flag.Parse()
if *flagWin == true {
dir := "../../../"
check_dir_files(dir)
}
testcases, err := discoverCases()
require.NoError(t, err)
runCmd(t, "suite_setup")
defer runCmd(t, "suite_teardown")
for _, tc := range testcases {
@@ -40,7 +55,8 @@ func TestEnvoy(t *testing.T) {
}
}
func runCmd(t *testing.T, c string, env ...string) {
func runCmdLinux(t *testing.T, c string, env ...string) {
t.Helper()
cmd := exec.Command("./run-tests.sh", c)
@@ -52,6 +68,34 @@ func runCmd(t *testing.T, c string, env ...string) {
}
}
func runCmdWindows(t *testing.T, c string, env ...string) {
t.Helper()
param_5 := "false"
if env != nil {
param_5 = strings.Join(env, " ")
}
cmd := exec.Command("cmd", "/C", "bash run-tests.windows.sh", c, param_5)
cmd.Env = append(os.Environ(), env...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
t.Fatalf("command failed: %v", err)
}
}
func runCmd(t *testing.T, c string, env ...string) {
t.Helper()
if *flagWin == true {
runCmdWindows(t, c, env...)
} else {
runCmdLinux(t, c, env...)
}
}
// Discover the cases so we pick up both oss and ent copies.
func discoverCases() ([]string, error) {
cwd, err := os.Getwd()
@@ -74,3 +118,57 @@ func discoverCases() ([]string, error) {
sort.Strings(out)
return out, nil
}
// CRLF convert functions
// Recursively iterates through the directory passed by parameter looking for the sh and bash files.
// Upon finding them, it calls crlf_file_check.
func check_dir_files(path string) {
files, err := ioutil.ReadDir(path)
if err != nil {
log.Fatal(err)
}
for _, fil := range files {
v := strings.Split(fil.Name(), ".")
file_extension := v[len(v)-1]
file_path := path + "/" + fil.Name()
if fil.IsDir() == true {
check_dir_files(file_path)
}
if file_extension == "sh" || file_extension == "bash" {
crlf_file_check(file_path)
}
}
}
// Check if a file contains CRLF line endings if so call crlf_normalize
func crlf_file_check(file_name string) {
file, err := ioutil.ReadFile(file_name)
text := string(file)
if edit := crlf_verify(text); edit != -1 {
crlf_normalize(file_name, text)
}
if err != nil {
log.Fatal(err)
}
}
// Checks for the existence of CRLF line endings.
func crlf_verify(text string) int {
position := strings.Index(text, "\r\n")
return position
}
// Replace CRLF line endings with LF.
func crlf_normalize(filename, text string) {
text = strings.Replace(text, "\r\n", "\n", -1)
data := []byte(text)
ioutil.WriteFile(filename, data, 0644)
}
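
For reference, the same normalization these helpers perform can be approximated from a shell (a sketch, not part of the test harness):
```shell
# Strip carriage returns from every .sh/.bash file under the current directory, mirroring crlf_normalize above.
find . -type f \( -name '*.sh' -o -name '*.bash' \) -exec sed -i 's/\r$//' {} +
```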

908
test/integration/connect/envoy/run-tests.windows.sh

@@ -0,0 +1,908 @@
#!/usr/bin/env bash
if [ $2 != "false" ]
then
export $2
fi
readonly self_name="$0"
readonly HASHICORP_DOCKER_PROXY="docker.mirror.hashicorp.services"
readonly SINGLE_CONTAINER_BASE_NAME=envoy_consul
# DEBUG=1 enables set -x for this script so echos every command run
DEBUG=${DEBUG:-}
XDS_TARGET=${XDS_TARGET:-server}
# ENVOY_VERSION to run each test against
ENVOY_VERSION=${ENVOY_VERSION:-"1.23.1"}
export ENVOY_VERSION
export DOCKER_BUILDKIT=0
if [ ! -z "$DEBUG" ] ; then
set -x
fi
source helpers.windows.bash
function command_error {
echo "ERR: command exited with status $1" 1>&2
echo " command: $2" 1>&2
echo " line: $3" 1>&2
echo " function: $4" 1>&2
echo " called at: $5" 1>&2
# printf '%s\n' "${FUNCNAME[@]}"
# printf '%s\n' "${BASH_SOURCE[@]}"
# printf '%s\n' "${BASH_LINENO[@]}"
}
trap 'command_error $? "${BASH_COMMAND}" "${LINENO}" "${FUNCNAME[0]:-main}" "${BASH_SOURCE[0]}:${BASH_LINENO[0]}"' ERR
readonly WORKDIR_SNIPPET="-v envoy_workdir:C:\workdir"
function network_snippet {
local DC="$1"
echo "--net=envoy-tests"
}
function aws_snippet {
LAMBDA_TESTS_ENABLED=${LAMBDA_TESTS_ENABLED:-false}
if [ "$LAMBDA_TESTS_ENABLED" != false ]; then
local snippet=""
# The Lambda integration cases assume that a Lambda function exists in $AWS_REGION with an ARN of $AWS_LAMBDA_ARN.
# The AWS credentials must have permission to invoke the Lambda function.
[ -n "$(set | grep '^AWS_ACCESS_KEY_ID=')" ] && snippet="${snippet} -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID"
[ -n "$(set | grep '^AWS_SECRET_ACCESS_KEY=')" ] && snippet="${snippet} -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY"
[ -n "$(set | grep '^AWS_SESSION_TOKEN=')" ] && snippet="${snippet} -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN"
[ -n "$(set | grep '^AWS_LAMBDA_REGION=')" ] && snippet="${snippet} -e AWS_LAMBDA_REGION=$AWS_LAMBDA_REGION"
[ -n "$(set | grep '^AWS_LAMBDA_ARN=')" ] && snippet="${snippet} -e AWS_LAMBDA_ARN=$AWS_LAMBDA_ARN"
echo "$snippet"
fi
}
function init_workdir {
local CLUSTER="$1"
if test -z "$CLUSTER"
then
CLUSTER=primary
fi
# Note, we use explicit set of dirs so we don't delete .gitignore. Also,
# don't wipe logs between runs as they are already split and we need them to
# upload as artifacts later.
rm -rf workdir/${CLUSTER}
rm -rf workdir/logs
mkdir -p workdir/${CLUSTER}/{consul,consul-server,register,envoy,bats,statsd,data}
# Reload consul config from defaults
cp consul-base-cfg/*.hcl workdir/${CLUSTER}/consul/
# Add any overrides if there are any (no op if not)
find ${CASE_DIR} -maxdepth 1 -name '*.hcl' -type f -exec cp -f {} workdir/${CLUSTER}/consul \;
# Copy all the test files
find ${CASE_DIR} -maxdepth 1 -name '*.bats' -type f -exec cp -f {} workdir/${CLUSTER}/bats \;
# Copy CLUSTER specific bats
cp helpers.windows.bash workdir/${CLUSTER}/bats/helpers.bash
# Add any CLUSTER overrides
if test -d "${CASE_DIR}/${CLUSTER}"
then
find ${CASE_DIR}/${CLUSTER} -type f -name '*.hcl' -exec cp -f {} workdir/${CLUSTER}/consul \;
find ${CASE_DIR}/${CLUSTER} -type f -name '*.bats' -exec cp -f {} workdir/${CLUSTER}/bats \;
fi
# move all of the registration files OUT of the consul config dir now
find workdir/${CLUSTER}/consul -type f -name 'service_*.hcl' -exec mv -f {} workdir/${CLUSTER}/register \;
# move the server.hcl out of the consul dir so that it doesn't get picked up
# by the client agent (if we're running with XDS_TARGET=client).
if test -f "workdir/${CLUSTER}/consul/server.hcl"
then
mv workdir/${CLUSTER}/consul/server.hcl workdir/${CLUSTER}/consul-server/server.hcl
fi
# copy the ca-certs for SDS so we can verify the right ones are served
mkdir -p workdir/test-sds-server/certs
cp test-sds-server/certs/ca-root.crt workdir/test-sds-server/certs/ca-root.crt
if test -d "${CASE_DIR}/data"
then
cp -r ${CASE_DIR}/data/* workdir/${CLUSTER}/data
fi
return 0
}
function docker_kill_rm {
local name
local todo=()
for name in "$@"; do
name="envoy_${name}_1"
if docker.exe container inspect $name &>/dev/null; then
if [[ "$name" == envoy_tcpdump-* ]]; then
echo -n "Gracefully stopping $name..."
docker.exe stop $name &> /dev/null
echo "done"
fi
todo+=($name)
fi
done
if [[ ${#todo[@]} -eq 0 ]]; then
return 0
fi
echo -n "Killing and removing: ${todo[@]}..."
docker.exe rm -v -f ${todo[@]} &> /dev/null
echo "done"
}
function start_consul {
local DC=${1:-primary}
# 8500/8502 are for consul
# 9411 is for zipkin which shares the network with consul
# 16686 is for jaeger ui which also shares the network with consul
ports=(
'-p=8500:8500'
'-p=8502:8502'
'-p=9411:9411'
'-p=16686:16686'
)
case "$DC" in
secondary)
ports=(
'-p=9500:8500'
'-p=9502:8502'
)
;;
alpha)
ports=(
'-p=9510:8500'
'-p=9512:8502'
)
;;
esac
license="${CONSUL_LICENSE:-}"
# load the consul license so we can pass it into the consul
# containers as an env var in the case that this is a consul
# enterprise test
if test -z "$license" -a -n "${CONSUL_LICENSE_PATH:-}"
then
license=$(cat $CONSUL_LICENSE_PATH)
fi
# We currently run these integration tests in two modes: one in which Envoy's
# xDS sessions are served directly by a Consul server, and another in which it
# goes through a client agent.
#
# This is necessary because servers and clients source configuration data in
# different ways (client agents use an RPC-backed cache and servers use their
# own local data) and we want to catch regressions in both.
#
# In the future we should also expand these tests to register services to the
# catalog directly (agentless) rather than relying on the server also being
# an agent.
#
# When XDS_TARGET=client we'll start a Consul server with its gRPC port
# disabled (but only if REQUIRE_PEERS is not set), and a client agent with
# its gRPC port enabled.
#
# When XDS_TARGET=server (or anything else) we'll run a single Consul server
# with its gRPC port enabled.
#
# In either case, the hostname `consul-${DC}-server` should be used as a
# server address (e.g. for WAN joining) and `consul-${DC}-client` should be
# used as a client address (e.g. for interacting with the HTTP API).
#
# Both hostnames work in both modes because we set network aliases on the
# containers such that both hostnames will resolve to the same container when
# XDS_TARGET=server.
#
# We also join containers to the network `container:consul-${DC}_1` in many
# places (see: network_snippet) so that we can curl localhost etc. In both
# modes, you can assume that this name refers to the client's container.
#
# Any .hcl files in the case/cluster directory will be given to both clients
# and servers (via the -config-dir flag) *except for* server.hcl which will
# only be applied to the server (and service registrations which will be made
# against the client).
if [[ "$XDS_TARGET" == "client" ]]
then
docker_kill_rm consul-${DC}-server
docker_kill_rm consul-${DC}
server_grpc_port="-1"
if is_set $REQUIRE_PEERS; then
server_grpc_port="8502"
fi
docker.exe run -d --name envoy_consul-${DC}-server_1 \
--net=envoy-tests \
$WORKDIR_SNIPPET \
--hostname "consul-${DC}-server" \
--network-alias "consul-${DC}-server" \
-e "CONSUL_LICENSE=$license" \
windows/consul:local \
agent -dev -datacenter "${DC}" \
-config-dir "C:\\workdir\\${DC}\\consul" \
-config-dir "C:\\workdir\\${DC}\\consul-server" \
-grpc-port $server_grpc_port \
-client "0.0.0.0" \
-bind "0.0.0.0" >/dev/null
docker.exe run -d --name envoy_consul-${DC}_1 \
--net=envoy-tests \
$WORKDIR_SNIPPET \
--hostname "consul-${DC}-client" \
--network-alias "consul-${DC}-client" \
-e "CONSUL_LICENSE=$license" \
${ports[@]} \
windows/consul:local \
agent -datacenter "${DC}" \
-config-dir "C:\\workdir\\${DC}\\consul" \
-data-dir "/tmp/consul" \
-client "0.0.0.0" \
-grpc-port 8502 \
-datacenter "${DC}" \
-retry-join "consul-${DC}-server" >/dev/null
else
docker_kill_rm consul-${DC}
docker.exe run -d --name envoy_consul-${DC}_1 \
--net=envoy-tests \
$WORKDIR_SNIPPET \
--memory 4096m \
--cpus 2 \
--hostname "consul-${DC}" \
--network-alias "consul-${DC}-client" \
--network-alias "consul-${DC}-server" \
-e "CONSUL_LICENSE=$license" \
${ports[@]} \
windows/consul:local \
agent -dev -datacenter "${DC}" \
-config-dir "C:\\workdir\\${DC}\\consul" \
-config-dir "C:\\workdir\\${DC}\\consul-server" \
-client "0.0.0.0" >/dev/null
fi
}
function start_partitioned_client {
local PARTITION=${1:-ap1}
# Start consul now as setup script needs it up
docker_kill_rm consul-${PARTITION}
license="${CONSUL_LICENSE:-}"
# load the consul license so we can pass it into the consul
# containers as an env var in the case that this is a consul
# enterprise test
if test -z "$license" -a -n "${CONSUL_LICENSE_PATH:-}"
then
license=$(cat $CONSUL_LICENSE_PATH)
fi
sh -c "rm -rf /workdir/${PARTITION}/data"
# Run consul and expose some ports to the host to make debugging locally a
# bit easier.
#
docker.exe run -d --name envoy_consul-${PARTITION}_1 \
--net=envoy-tests \
$WORKDIR_SNIPPET \
--hostname "consul-${PARTITION}-client" \
--network-alias "consul-${PARTITION}-client" \
-e "CONSUL_LICENSE=$license" \
windows/consul:local agent \
-datacenter "primary" \
-retry-join "consul-primary-server" \
-grpc-port 8502 \
-data-dir "/tmp/consul" \
-config-dir "C:\\workdir\\${PARTITION}/consul" \
-client "0.0.0.0" >/dev/null
}
function pre_service_setup {
local CLUSTER=${1:-primary}
# Run test case setup (e.g. generating Envoy bootstrap, starting containers)
if [ -f "${CASE_DIR}/${CLUSTER}/setup.sh" ]
then
source ${CASE_DIR}/${CLUSTER}/setup.sh
else
source ${CASE_DIR}/setup.sh
fi
}
function start_services {
# Start containers required
if [ ! -z "$REQUIRED_SERVICES" ] ; then
docker_kill_rm $REQUIRED_SERVICES
run_containers $REQUIRED_SERVICES
fi
return 0
}
function verify {
local CLUSTER="$1"
if test -z "$CLUSTER"; then
CLUSTER="primary"
fi
# Execute tests
res=0
# Nuke any previous case's verify container.
docker_kill_rm verify-${CLUSTER}
echo "Running ${CLUSTER} verification step for ${CASE_DIR}..."
# We need to tell PID 1 inside the container that it won't be the actual
# PID 1 (because we're using --pid=host), so we set TINI_SUBREAPER.
if docker.exe exec -i ${SINGLE_CONTAINER_BASE_NAME}-${CLUSTER}_1 bash \
-c "TINI_SUBREAPER=1 \
ENVOY_VERSION=${ENVOY_VERSION} \
XDS_TARGET=${XDS_TARGET} \
/c/bats/bin/bats \
--pretty /c/workdir/${CLUSTER}/bats" ; then
echo "✓ PASS"
else
echo "⨯ FAIL"
res=1
fi
return $res
}
function capture_logs {
local LOG_DIR="workdir/logs/${CASE_DIR}/${ENVOY_VERSION}"
init_vars
echo "Capturing Logs"
mkdir -p "$LOG_DIR"
services="$REQUIRED_SERVICES consul-primary"
if [[ "$XDS_TARGET" == "client" ]]
then
services="$services consul-primary-server"
fi
if is_set $REQUIRE_SECONDARY
then
services="$services consul-secondary"
if [[ "$XDS_TARGET" == "client" ]]
then
services="$services consul-secondary-server"
fi
fi
if is_set $REQUIRE_PARTITIONS
then
services="$services consul-ap1"
fi
if is_set $REQUIRE_PEERS
then
services="$services consul-alpha"
if [[ "$XDS_TARGET" == "client" ]]
then
services="$services consul-alpha-server"
fi
fi
if [ -f "${CASE_DIR}/capture.sh" ]
then
echo "Executing ${CASE_DIR}/capture.sh"
source ${CASE_DIR}/capture.sh || true
fi
for cont in $services; do
echo "Capturing log for $cont"
docker.exe logs "envoy_${cont}_1" &> "${LOG_DIR}/${cont}.log" || {
echo "EXIT CODE $?" > "${LOG_DIR}/${cont}.log"
}
done
}
function stop_services {
# Teardown
docker_kill_rm $REQUIRED_SERVICES
docker_kill_rm consul-primary consul-primary-server consul-secondary consul-secondary-server consul-ap1 consul-alpha consul-alpha-server
}
function init_vars {
source "defaults.sh"
if [ -f "${CASE_DIR}/vars.sh" ] ; then
source "${CASE_DIR}/vars.sh"
fi
}
function global_setup {
if [ -f "${CASE_DIR}/global-setup-windows.sh" ] ; then
source "${CASE_DIR}/global-setup-windows.sh"
fi
}
function wipe_volumes {
docker.exe exec -w "C:\workdir" envoy_workdir_1 cmd /c "rd /s /q . 2>nul"
}
# Windows containers do not allow the cp command while they are running.
function stop_and_copy_files {
# Create CMD file to execute within the container
echo "icacls C:\workdir /grant:r Everyone:(OI)(CI)F /T" > copy.cmd
echo "XCOPY C:\workdir_bak C:\workdir /e /h /c /i /y" >> copy.cmd
# Stop dummy container to copy local workdir to container's workdir_bak
docker.exe stop envoy_workdir_1 > /dev/null
docker.exe cp workdir/. envoy_workdir_1:/workdir_bak
# Copy CMD file into container
docker.exe cp copy.cmd envoy_workdir_1:/
# Start dummy container and execute the CMD file
docker.exe start envoy_workdir_1 > /dev/null
docker.exe exec envoy_workdir_1 copy.cmd
# Delete local CMD file after execution
rm copy.cmd
}
function run_tests {
CASE_DIR="${CASE_DIR?CASE_DIR must be set to the path of the test case}"
CASE_NAME=$( basename $CASE_DIR | cut -c6- )
export CASE_NAME
export SKIP_CASE=""
init_vars
# Initialize the workdir
init_workdir primary
if is_set $REQUIRE_SECONDARY
then
init_workdir secondary
fi
if is_set $REQUIRE_PARTITIONS
then
init_workdir ap1
fi
if is_set $REQUIRE_PEERS
then
init_workdir alpha
fi
global_setup
# Allow vars.sh to set a reason to skip this test case based on the ENV
if [ "$SKIP_CASE" != "" ] ; then
echo "SKIPPING CASE: $SKIP_CASE"
return 0
fi
# Wipe state
wipe_volumes
# Copying base files to shared volume
stop_and_copy_files
# Starting Consul primary cluster
start_consul primary
if is_set $REQUIRE_SECONDARY; then
start_consul secondary
fi
if is_set $REQUIRE_PARTITIONS; then
docker_consul "primary" consul partition create -name ap1 > /dev/null
start_partitioned_client ap1
fi
if is_set $REQUIRE_PEERS; then
start_consul alpha
fi
echo "Setting up the primary datacenter"
pre_service_setup primary
if is_set $REQUIRE_SECONDARY; then
echo "Setting up the secondary datacenter"
pre_service_setup secondary
fi
if is_set $REQUIRE_PARTITIONS; then
echo "Setting up the non-default partition"
pre_service_setup ap1
fi
if is_set $REQUIRE_PEERS; then
echo "Setting up the alpha peer"
pre_service_setup alpha
fi
echo "Starting services"
start_services
# Run the verify container and report on the output
echo "Verifying the primary datacenter"
verify primary
if is_set $REQUIRE_SECONDARY; then
echo "Verifying the secondary datacenter"
verify secondary
fi
if is_set $REQUIRE_PEERS; then
echo "Verifying the alpha peer"
verify alpha
fi
}
function test_teardown {
init_vars
stop_services
}
function workdir_cleanup {
docker_kill_rm workdir
docker.exe volume rm -f envoy_workdir &>/dev/null || true
}
function suite_setup {
# Cleanup from any previous unclean runs.
suite_teardown
docker.exe network create -d "nat" envoy-tests &>/dev/null
# Start the volume container
#
# This is a dummy container that we use to create the volume and keep it
# accessible while other containers are down.
docker.exe volume create envoy_workdir &>/dev/null
docker.exe run -d --name envoy_workdir_1 \
$WORKDIR_SNIPPET \
--user ContainerAdministrator \
--net=none \
"${HASHICORP_DOCKER_PROXY}/windows/kubernetes/pause" &>/dev/null
# pre-build the consul+envoy container
echo "Rebuilding 'windows/consul:local' image with envoy $ENVOY_VERSION..."
retry_default docker.exe build -t windows/consul:local \
--build-arg ENVOY_VERSION=${ENVOY_VERSION} \
-f Dockerfile-consul-envoy-windows .
local CONSUL_VERSION=$(docker image inspect --format='{{.ContainerConfig.Labels.version}}' \
windows/consul:local)
echo "Running Tests with Consul=$CONSUL_VERSION - Envoy=$ENVOY_VERSION - XDS_TARGET=$XDS_TARGET"
}
function suite_teardown {
docker_kill_rm verify-primary verify-secondary verify-alpha
# this is some hilarious magic
docker_kill_rm $(grep "^function run_container_" $self_name | \
sed 's/^function run_container_\(.*\) {/\1/g')
docker_kill_rm consul-primary consul-primary-server consul-secondary consul-secondary-server consul-ap1 consul-alpha consul-alpha-server
if docker.exe network inspect envoy-tests &>/dev/null ; then
echo -n "Deleting network 'envoy-tests'..."
docker.exe network rm envoy-tests
echo "done"
fi
workdir_cleanup
}
function run_containers {
for name in $@ ; do
run_container $name
done
}
function run_container {
docker_kill_rm "$1"
"run_container_$1"
}
function common_run_container_service {
local service="$1"
local CLUSTER="$2"
local httpPort="$3"
local grpcPort="$4"
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$CLUSTER"_1
docker.exe exec -d $CONTAINER_NAME bash \
-c "FORTIO_NAME=${service} \
fortio.exe server \
-http-port ":$httpPort" \
-grpc-port ":$grpcPort" \
-redirect-port disabled"
}
function run_container_s1 {
common_run_container_service s1 primary 8080 8079
}
function run_container_s1-ap1 {
common_run_container_service s1 ap1 8080 8079
}
function run_container_s2 {
common_run_container_service s2 primary 8181 8179
}
function run_container_s2-v1 {
common_run_container_service s2-v1 primary 8182 8178
}
function run_container_s2-v2 {
common_run_container_service s2-v2 primary 8183 8177
}
function run_container_s3 {
common_run_container_service s3 primary 8282 8279
}
function run_container_s3-v1 {
common_run_container_service s3-v1 primary 8283 8278
}
function run_container_s3-v2 {
common_run_container_service s3-v2 primary 8284 8277
}
function run_container_s3-alt {
common_run_container_service s3-alt primary 8286 8280
}
function run_container_s4 {
common_run_container_service s4 primary 8382 8281
}
function run_container_s1-secondary {
common_run_container_service s1-secondary secondary 8080 8079
}
function run_container_s2-secondary {
common_run_container_service s2-secondary secondary 8181 8179
}
function run_container_s2-ap1 {
common_run_container_service s2 ap1 8480 8479
}
function run_container_s3-ap1 {
common_run_container_service s3 ap1 8580 8579
}
function run_container_s1-alpha {
common_run_container_service s1-alpha alpha 8080 8079
}
function run_container_s2-alpha {
common_run_container_service s2-alpha alpha 8181 8179
}
function run_container_s3-alpha {
common_run_container_service s3-alpha alpha 8282 8279
}
function common_run_container_sidecar_proxy {
local service="$1"
local CLUSTER="$2"
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$CLUSTER"_1
# Hot restart breaks since both envoys seem to interact with each other
# despite running in separate containers that don't share an IPC namespace.
# Not quite sure how this happens, but it may be due to the unix socket
# being in some shared location?
docker.exe exec -d $CONTAINER_NAME bash \
-c "envoy.exe \
-c /c/workdir/${CLUSTER}/envoy/${service}-bootstrap.json \
-l trace \
--disable-hot-restart \
--drain-time-s 1 >/dev/null"
}
function run_container_s1-sidecar-proxy {
common_run_container_sidecar_proxy s1 primary
}
function run_container_s1-ap1-sidecar-proxy {
common_run_container_sidecar_proxy s1 ap1
}
function run_container_s1-sidecar-proxy-consul-exec {
local CLUSTER="primary"
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$CLUSTER"_1
local ADMIN_HOST="127.0.0.1"
local ADMIN_PORT="19000"
docker.exe exec -d $CONTAINER_NAME bash \
-c "consul connect envoy -sidecar-for s1 \
-http-addr $CONTAINER_NAME:8500 \
-grpc-addr $CONTAINER_NAME:8502 \
-admin-bind $ADMIN_HOST:$ADMIN_PORT \
-envoy-version ${ENVOY_VERSION} \
-- \
-l trace >/dev/null"
}
function run_container_s2-sidecar-proxy {
common_run_container_sidecar_proxy s2 primary
}
function run_container_s2-v1-sidecar-proxy {
common_run_container_sidecar_proxy s2-v1 primary
}
function run_container_s2-v2-sidecar-proxy {
common_run_container_sidecar_proxy s2-v2 primary
}
function run_container_s3-sidecar-proxy {
common_run_container_sidecar_proxy s3 primary
}
function run_container_s3-v1-sidecar-proxy {
common_run_container_sidecar_proxy s3-v1 primary
}
function run_container_s3-v2-sidecar-proxy {
common_run_container_sidecar_proxy s3-v2 primary
}
function run_container_s3-alt-sidecar-proxy {
common_run_container_sidecar_proxy s3-alt primary
}
function run_container_s1-sidecar-proxy-secondary {
common_run_container_sidecar_proxy s1 secondary
}
function run_container_s2-sidecar-proxy-secondary {
common_run_container_sidecar_proxy s2 secondary
}
function run_container_s2-ap1-sidecar-proxy {
common_run_container_sidecar_proxy s2 ap1
}
function run_container_s3-ap1-sidecar-proxy {
common_run_container_sidecar_proxy s3 ap1
}
function run_container_s1-sidecar-proxy-alpha {
common_run_container_sidecar_proxy s1 alpha
}
function run_container_s2-sidecar-proxy-alpha {
common_run_container_sidecar_proxy s2 alpha
}
function run_container_s3-sidecar-proxy-alpha {
common_run_container_sidecar_proxy s3 alpha
}
function common_run_container_gateway {
local name="$1"
local DC="$2"
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$DC"_1
# Hot restart breaks since both envoys seem to interact with each other
# despite running in separate containers that don't share an IPC namespace.
# Not quite sure how this happens, but it may be due to the unix socket
# being in some shared location?
docker.exe exec -d $CONTAINER_NAME bash \
-c "envoy.exe \
-c /c/workdir/${DC}/envoy/${name}-bootstrap.json \
-l trace \
--disable-hot-restart \
--drain-time-s 1 >/dev/null"
}
function run_container_gateway-primary {
common_run_container_gateway mesh-gateway primary
}
function run_container_gateway-secondary {
common_run_container_gateway mesh-gateway secondary
}
function run_container_gateway-alpha {
common_run_container_gateway mesh-gateway alpha
}
function run_container_ingress-gateway-primary {
common_run_container_gateway ingress-gateway primary
}
function run_container_api-gateway-primary {
common_run_container_gateway api-gateway primary
}
function run_container_terminating-gateway-primary {
common_run_container_gateway terminating-gateway primary
}
function run_container_fake-statsd {
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"primary"_1
# On Linux a magic socat SYSTEM incantation is needed since Envoy doesn't add
# newlines, so each packet has to be passed through echo to append a newline
# before being written. That incantation does not work on Windows, so here we
# append the raw packets directly to the log file.
docker.exe exec -d $CONTAINER_NAME bash -c "socat -u UDP-RECVFROM:8125,fork,reuseaddr OPEN:/workdir/primary/statsd/statsd.log,create,append"
}
function run_container_zipkin {
docker.exe run -d --name $(container_name) \
$WORKDIR_SNIPPET \
$(network_snippet primary) \
"${HASHICORP_DOCKER_PROXY}/windows/openzipkin"
}
function run_container_jaeger {
echo "Starting Jaeger service..."
local DC=${1:-primary}
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$DC"_1
docker.exe exec -d $CONTAINER_NAME bash -c "jaeger-all-in-one.exe \
--collector.zipkin.http-port=9411"
}
function run_container_test-sds-server {
echo "Starting test-sds-server"
local DC=${1:-primary}
local CONTAINER_NAME="$SINGLE_CONTAINER_BASE_NAME"-"$DC"_1
docker.exe exec -d $CONTAINER_NAME bash -c "cd /c/test-sds-server &&
./test-sds-server.exe"
}
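# Derive the docker container name from the calling run_container_* function
# name, e.g. run_container_s1 -> envoy_s1_1.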
function container_name {
echo "envoy_${FUNCNAME[1]/#run_container_/}_1"
}
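# Same as container_name but looks one frame further up the call stack, so it
# can be used from within a common_run_container_* helper.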
function container_name_prev {
echo "envoy_${FUNCNAME[2]/#run_container_/}_1"
}
# This is a debugging tool. Run via 'bash run-tests.windows.sh debug_dump_volumes' from Powershell
function debug_dump_volumes {
local LINUX_PATH=$(pwd)
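# Convert the current (MSYS/WSL-style) path to a Windows path, e.g.
# /mnt/c/Users/foo -> c:\Users\foo, so it can be mounted into the container.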
local WIN_PATH=$( echo "$LINUX_PATH" | sed 's/^\/mnt//' | sed -e 's/^\///' -e 's/\//\\/g' -e 's/^./\0:/' )
docker.exe run -it \
$WORKDIR_SNIPPET \
-v "$WIN_PATH":"C:\\cwd" \
--net=none \
"${HASHICORP_DOCKER_PROXY}/windows/nanoserver:1809" \
cmd /c "xcopy \workdir \cwd\workdir /E /H /C /I /Y"
}
function run_container_tcpdump-primary {
# To use add "tcpdump-primary" to REQUIRED_SERVICES
common_run_container_tcpdump primary
}
function run_container_tcpdump-secondary {
# To use add "tcpdump-secondary" to REQUIRED_SERVICES
common_run_container_tcpdump secondary
}
function run_container_tcpdump-alpha {
# To use add "tcpdump-alpha" to REQUIRED_SERVICES
common_run_container_tcpdump alpha
}
function common_run_container_tcpdump {
local DC="$1"
# We can't run this in CircleCI, but it's kept here so it can be temporarily enabled.
# docker.exe build --rm=false -t envoy-tcpdump -f Dockerfile-tcpdump-windows .
docker.exe run -d --name $(container_name_prev) \
$(network_snippet $DC) \
envoy-tcpdump \
-v -i any \
-w "/data/${DC}.pcap"
}
case "${1-}" in
"")
echo "command required"
exit 1 ;;
*)
"$@" ;;
esac
test/integration/connect/envoy/windows-troubleshooting.md
# Envoy Integration Tests on Windows
## Index
- [About this Guide](#about-this-guide)
- [Prerequisites](#prerequisites)
- [Running the Tests](#running-the-tests)
- [Troubleshooting](#troubleshooting)
- [About Envoy Integration Tests on Windows](#about-envoy-integration-tests-on-windows)
- [Common Errors](#common-errors)
- [Windows Scripts Changes](#windows-scripts-changes)
- [Volume Issues](#volume-issues)
## About this Guide
In this guide you will find all the information required to run the Envoy integration tests on Windows.
## Prerequisites
To run the integration tests you will need to have the following installed on your system:
- Go v1.18 (or later).
- The gotestsum library ([installation](https://pkg.go.dev/gotest.tools/gotestsum)).
- Docker.
Before running the tests, you will need to build the required Docker images. To do so, you can use the script provided [here](../../../../build-support-windows/build-images.sh):
- Build Images Script Execution
  - From a Bash console (Git Bash or WSL) execute: `./build-images.sh`
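For example, a minimal build flow might look like the following (a sketch assuming you start from the repository root; `build-images.sh` lives in the `build-support-windows` directory linked above):
```bash
cd build-support-windows
./build-images.sh     # builds the Windows images the tests rely on
docker.exe images     # sanity-check that the new images are listed
```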
## Running the Tests
To execute the tests you need to run the following command depending on the shell you are using:
**On Powershell**:
`go test -v -timeout=30m -tags integration ./test/integration/connect/envoy -run="TestEnvoy/<TEST CASE>" -win=true`
Where **TEST CASE** is the individual test case you want to execute (e.g. case-badauthz).
**On Git Bash**:
`ENVOY_VERSION=<ENVOY VERSION> go test -v -timeout=30m -tags integration ./test/integration/connect/envoy -run="TestEnvoy/<TEST CASE>" -win=true`
Where **TEST CASE** is the individual test case you want to execute (e.g. case-badauthz), and **ENVOY VERSION** is the Envoy version you are currently testing.
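For example, a complete Git Bash invocation for a single case could look like this (a sketch assuming Envoy 1.23.1 and the `case-badauthz` test case):
```bash
ENVOY_VERSION=1.23.1 go test -v -timeout=30m -tags integration \
  ./test/integration/connect/envoy -run="TestEnvoy/case-badauthz" -win=true
```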
> [!TIP]
> When executing the integration tests using **Powershell** you may need to set the ENVOY_VERSION value manually in line 20 of the [run-tests.windows.sh](run-tests.windows.sh) file.
> [!WARNING]
> When executing the integration tests for Windows environments, the **End of Line Sequence** of every related file and/or script will be changed from **LF** to **CRLF**.
### About Envoy Integration Tests on Windows
Integration tests on Linux run a multi-container architecture that takes advantage of Docker's host network feature. With this feature the container's network stack is not isolated from the Docker host (the container shares the host's networking namespace) and the container does not get its own IP address allocated (read more about this [here](https://docs.docker.com/network/host/)). This feature is only available on Linux, which made migrating the tests to Windows challenging: replicating the same architecture created more issues, so a **single container** architecture was chosen to run the Envoy integration tests.
Using a single-container architecture meant that we could reuse the same tests as on Linux; moreover, we were able to speed up their execution by replacing the *docker run* commands that started utility containers with *docker exec* commands, as illustrated below.
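As a rough sketch of the difference (simplified from *common_run_container_service* in run-tests.windows.sh; the multi-container command is illustrative rather than the exact upstream invocation, and the single-container name assumes the `envoy_consul-primary_1` container started by *start_consul*):
```bash
# Multi-container (Linux) style: each utility service runs in its own container.
docker run -d --name envoy_s1_1 --net=envoy-tests fortio/fortio server -http-port :8080

# Single-container (Windows) style: exec the service inside the already-running
# cluster container instead of starting a new one.
docker.exe exec -d envoy_consul-primary_1 bash \
  -c "FORTIO_NAME=s1 fortio.exe server -http-port :8080 -grpc-port :8079 -redirect-port disabled"
```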
### Common Errors
If the tests are executed without Docker running, the following error will be seen:
```powershell
error during connect: This error may indicate that the docker daemon is not running.: Post "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile-bats-windows&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=bats-verify&target=&ulimits=null&version=1": open //./pipe/docker_engine: The system cannot find the file specified.
```
If any of the Docker images is missing or mistagged, an error similar to the following will be displayed:
```powershell
Error response from daemon: No such container: envoy_workdir_1
```
If you run the Windows tests from WSL you will get the following error message:
```bash
main_test.go:34: command failed: exec: "cmd": executable file not found in $PATH
```
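Before digging further, it can help to confirm that the Docker daemon is reachable and that the objects the tests expect actually exist (a quick sanity check, not part of the test scripts):
```bash
docker.exe info                                  # fails fast if the daemon/pipe is unavailable
docker.exe images windows/consul:local           # image built during suite setup
docker.exe ps -a --filter name=envoy_workdir_1   # dummy workdir container used by the tests
```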
## Windows Scripts Changes
- The "http-addr", "grpc-addr" and "admin-access-log-path" flags were added to the creation of the Envoy Bootstrap files.
- To execute commands, sh was replaced by bash in our Windows container.
- All paths were updated to use Windows format.
- Created the *stop_and_copy_files* function to copy files into the shared volume (see [volume issues](#volume-issues)).
- Changed the *-admin-bind* value from `0.0.0.0` to `127.0.0.1` when generating the Envoy Bootstrap files.
- Removed the *&&* from *common_run_container_service*'s docker exec command and replaced it with *\* line continuations.
- Removed the *docker_wget* and *docker_curl* functions from the [helpers.windows.bash](helpers.windows.bash) file and replaced them with **docker_consul_exec**; this way we avoid starting intermediate containers when capturing logs.
- The *wipe_volumes* function uses a `docker exec` command instead of the original `docker run` (see the sketch after this list); this speeds up test execution by avoiding starting a new container just to delete the volume contents before each test run.
- For **case-grpc** we increased the `envoy_stats_flush_interval` value from 1s to 5s; on Windows, the original value caused the test to pass or fail randomly.
- For **case-wanfed-gw** a new script was created: **global-setup-windows.sh**. This file replaces global-setup.sh when running this test on Windows. The new script uses the windows/consul:local Docker image to generate the required TLS files and copies them into the host's workdir directory.
- To use the **debug_dump_volumes** function, run it from Powershell with `bash run-tests.windows.sh debug_dump_volumes`. Make sure your terminal is positioned in the correct directory.
- **case-consul-exec** can only be run when using the consul-dev Docker image from this repository, since it relies on features implemented only here: a Windows-valid default value for "-admin-access-log-path" and the `consul connect envoy` command starting Envoy. These features have also been submitted in [PR#15114](https://github.com/hashicorp/consul/pull/15114).
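For reference, the volume wipe mentioned above boils down to a single `docker exec` against the long-lived `envoy_workdir_1` container (taken from the *wipe_volumes* helper):
```bash
# Clear the shared C:\workdir volume in place rather than starting a
# throwaway container just to delete files.
docker.exe exec -w "C:\workdir" envoy_workdir_1 cmd /c "rd /s /q . 2>nul"
```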
## Volume Issues
Another difference that arose when migrating the tests from Linux to Windows is that file-system operations can't be executed while Windows containers are running. Currently, when running the tests, a **named volume** is created and all of the required files are copied into that volume. Because of this constraint, the workaround we implemented was to create a function (**stop_and_copy_files**) that stops the *kubernetes/pause* container, executes a script to copy the required files, and finally starts the container again, as sketched below.
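The sequence looks roughly like this (condensed from the *stop_and_copy_files* helper; `copy.cmd` contains the icacls and XCOPY commands that replay the staged files into `C:\workdir`):
```bash
docker.exe stop envoy_workdir_1 > /dev/null       # stop the pause container so files can be copied in
docker.exe cp workdir/. envoy_workdir_1:/workdir_bak
docker.exe cp copy.cmd envoy_workdir_1:/
docker.exe start envoy_workdir_1 > /dev/null
docker.exe exec envoy_workdir_1 copy.cmd          # icacls + XCOPY workdir_bak -> workdir
```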