adding expanded release docs (#6237)

Signed-off-by: matttrach <matttrach@gmail.com>
Matt Trachier 2022-12-02 16:27:02 -06:00 committed by GitHub
parent b255b07de2
commit 95bb3dce97
15 changed files with 587 additions and 0 deletions

@ -0,0 +1,7 @@
# Generate Build Container
1. set env variable PATH_TO_KUBERNETES_REPO to the path to your local kubernetes/kubernetes copy: `export PATH_TO_KUBERNETES_REPO="/Users/mtrachier/go/src/github.com/kubernetes/kubernetes"`
1. set env variable GOVERSION to the expected version of go for the kubernetes/kubernetes version checked out: `export GOVERSION=$(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' $PATH_TO_KUBERNETES_REPO/build/dependencies.yaml)`
1. set env variable GOIMAGE to the expected container image to base our custom build image on: `export GOIMAGE="golang:${GOVERSION}-alpine3.15"`
1. set env variable BUILD_CONTAINER to the contents of a dockerfile for the build container: `export BUILD_CONTAINER="FROM ${GOIMAGE}\nRUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"`
1. use Docker to create the build container: `echo -e $BUILD_CONTAINER | docker build -t ${GOIMAGE}-dev -`
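Putting the steps above together, a minimal sketch (assuming Docker and yq v4 are installed and the paths match your machine):
```
export PATH_TO_KUBERNETES_REPO="$HOME/go/src/github.com/kubernetes/kubernetes"
export GOVERSION=$(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' $PATH_TO_KUBERNETES_REPO/build/dependencies.yaml)
export GOIMAGE="golang:${GOVERSION}-alpine3.15"
export BUILD_CONTAINER="FROM ${GOIMAGE}\nRUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"
# the resulting image is tagged "<GOIMAGE>-dev", e.g. golang:1.16.15-alpine3.15-dev
echo -e $BUILD_CONTAINER | docker build -t ${GOIMAGE}-dev -
```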

@ -0,0 +1,13 @@
# Update Channel Server
Once the release is verified, the channel server config needs to be updated to reflect the new version for “stable”.  
1. The `channel.yaml` file can be found at the [root of the K3s repo](https://github.com/k3s-io/k3s/blob/master/channel.yaml).
1. Updating the channel server only requires a single-line change.
1. Release Captains responsible for this change will need to update the following stanza to reflect the new stable version of Kubernetes relative to the release in progress.
1. Example:
```
channels:
- name: stable
  latest: v1.22.12+k3s1
```
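A sketch of making the same one-line change from the command line (assuming mikefarah yq v4; editing the file by hand works just as well):
```
# bump the "stable" channel to the newly verified release
yq eval -i '(.channels[] | select(.name == "stable")).latest = "v1.22.12+k3s1"' channel.yaml
git diff channel.yaml
```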

@ -0,0 +1,96 @@
# Cut Release
1. Verify that the merge CI has successfully completed before cutting the RC
1. After the merge CI has completed, cut an RC by creating a release in the GitHub interface
1. the title is the version of k3s you are releasing with the rc1 subversion, e.g. "v1.25.0-rc1+k3s1"
1. the target should match the release branch; remember that the latest version is attached to "master"
1. no description
1. the tag should match the title
1. After the RC is cut, validate that the CI for the RC passes
1. After the RC CI passes, notify the release Slack channel about the new RC
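If you prefer the command line over the web interface, a rough equivalent using the GitHub CLI (assuming `gh` is installed and authenticated; marking the RC as a pre-release and leaving the notes empty are assumptions here):
```
# tag and title match; target is the release branch (or "master" when this minor is the latest)
gh release create "v1.25.0-rc1+k3s1" \
  --repo k3s-io/k3s \
  --target master \
  --title "v1.25.0-rc1+k3s1" \
  --prerelease \
  --notes ""
```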
Example Full Command List (this is not a script!):
```
export SSH_MOUNT_PATH="/var/folders/...krzO/agent.452"
export GLOBAL_GITCONFIG_PATH="/Users/mtrachier/.gitconfig"
export GLOBAL_GIT_CONFIG_PATH="/Users/mtrachier/.gitconfig"
export OLD_K8S="v1.22.14"
export NEW_K8S="v1.22.15"
export OLD_K8S_CLIENT="v0.22.14"
export NEW_K8S_CLIENT="v0.22.15"
export OLD_K3S_VER="v1.22.14-k3s1"
export NEW_K3S_VER="v1.22.15-k3s1"
export RELEASE_BRANCH="release-1.22"
export GOPATH="/Users/mtrachier/go"
export GOVERSION="1.16.15"
export GOIMAGE="golang:1.16.15-alpine3.15"
export BUILD_CONTAINER="FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"
install -d /Users/mtrachier/go/src/github.com/kubernetes
rm -rf /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
git clone --origin upstream https://github.com/kubernetes/kubernetes.git /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
cd /Users/mtrachier/go/src/github.com/kubernetes/kubernetes
git remote add k3s-io https://github.com/k3s-io/kubernetes.git
git fetch --all --tags
# this second fetch should return no more tags pulled, this makes it easier to see pull errors
git fetch --all --tags
# rebase
rm -rf _output
git rebase --onto v1.22.15 v1.22.14 v1.22.14-k3s1~1
# validate go version
echo "GOVERSION is $(yq -e '.dependencies[] | select(.name == "golang: upstream version").version' build/dependencies.yaml)"
# generate build container
echo -e "FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk" | docker build -t golang:1.16.15-alpine3.15-dev -
# run tag.sh
# note user id is 502, I am not root user
docker run --rm -u 502 \
--mount type=tmpfs,destination=/Users/mtrachier/go/pkg \
-v /Users/mtrachier/go/src:/go/src \
-v /Users/mtrachier/go/.cache:/go/.cache \
-v /Users/mtrachier/.gitconfig:/go/.gitconfig \
-e HOME=/go \
-e GOCACHE=/go/.cache \
-w /go/src/github.com/kubernetes/kubernetes golang:1.16.15-alpine3.15-dev ./tag.sh v1.22.15-k3s1 2>&1 | tee ~/tags-v1.22.15-k3s1.log
# generate and run push.sh, make sure to paste in the tag.sh output below
vim push.sh
chmod +x push.sh
./push.sh
install -d /Users/mtrachier/go/src/github.com/k3s-io
rm -rf /Users/mtrachier/go/src/github.com/k3s-io/k3s
git clone --origin upstream https://github.com/k3s-io/k3s.git /Users/mtrachier/go/src/github.com/k3s-io/k3s
cd /Users/mtrachier/go/src/github.com/k3s-io/k3s
git checkout -B v1.22.15-k3s1 upstream/release-1.22
git clean -xfd
# note that sed has different parameters on MacOS than Linux
# also note that zsh is the default MacOS shell and is not bash/dash (the default Linux shells)
sed -Ei '' "\|github.com/k3s-io/kubernetes| s|v1.22.14-k3s1|v1.22.15-k3s1|" go.mod
git diff
sed -Ei '' "s/k8s.io\/kubernetes v.*$/k8s.io\/kubernetes v1.22.15/" go.mod
git diff
sed -Ei '' "s/v0.22.14/v0.22.15/g" go.mod
git diff
go mod tidy
# make sure go version is updated in all locations
vim .github/workflows/integration.yaml
vim .github/workflows/unitcoverage.yaml
vim Dockerfile.dapper
vim Dockerfile.manifest
vim Dockerfile.test
git commit --all --signoff -m "Update to v1.22.15"
git remote add origin https://github.com/matttrach/k3s-1.git
git push --set-upstream origin v1.22.15-k3s1
# use link to generate pull request, make sure your target is the proper release branch 'release-1.22'
```

@ -0,0 +1,5 @@
# Generate Milestones
If no milestones exist in the k3s repo for the releases, generate them.
No due date or description is necessary; they can be updated later as needed.
Make sure to post the new milestones in the Slack channel if you generated any.
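A sketch of creating a milestone from the command line (assuming the `gh` CLI; the title below is only an example, and creating milestones in the web UI works just as well):
```
# one milestone per release; no due date or description needed
gh api repos/k3s-io/k3s/milestones -f title="v1.25.1+k3s1"
```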

@ -0,0 +1,76 @@
# Generate Pull Request
We update the go.mod in k3s to point to the new modules, and submit the change for review.
1. make sure git is clean before making changes
1. make sure your origin is up to date before making changes
1. check out a new branch for the new k3s version in the local copy using the formal semantic name, e.g. "v1.25.1-k3s1"
1. replace any instances of the old k3s version, e.g. "v1.25.0-k3s1", with the new k3s version, e.g. "v1.25.1-k3s1", in k3s-io module links
1. replace any instances of the old Kubernetes version, e.g. "v1.25.0", with the new Kubernetes version, e.g. "v1.25.1"
1. replace any instances of the old Kubernetes client-go version, e.g. "v0.25.0", with the new version, e.g. "v0.25.1"
1. sed commands make this process easier (this is not a script):
1. Linux example:
```
sed -Ei "\|github.com/k3s-io/kubernetes| s|${OLD_K3S_VER}|${NEW_K3S_VER}|" go.mod
sed -Ei "s/k8s.io\/kubernetes v\S+/k8s.io\/kubernetes ${NEW_K8S}/" go.mod
sed -Ei "s/$OLD_K8S_CLIENT/$NEW_K8S_CLIENT/g" go.mod
```
1. Mac example:
```
# note that sed has different parameters on MacOS than Linux
# also note that zsh is the default MacOS shell and is not bash/dash (the default Linux shells)
sed -Ei '' "\|github.com/k3s-io/kubernetes| s|${OLD_K3S_VER}|${NEW_K3S_VER}|" go.mod
git diff
sed -Ei '' "s/k8s.io\/kubernetes v.*$/k8s.io\/kubernetes ${NEW_K8S}/" go.mod
git diff
sed -Ei '' "s/${OLD_K8S_CLIENT}/${NEW_K8S_CLIENT}/g" go.mod
git diff
go mod tidy
git diff
```
1. update extra places to make sure the go version is correct
1. `.github/workflows/integration.yaml`
1. `.github/workflows/unitcoverage.yaml`
1. `Dockerfile.dapper`
1. `Dockerfile.manifest`
1. `Dockerfile.test`
1. commit the changes and push to your origin (a consolidated sketch appears after this list)
1. make sure to sign your commits
1. make sure to push to "origin" not "upstream", be explicit in your push commands
1. example: 'git push -u origin v1.25.1-k3s1'
1. the git output will include a link to generate a pull request, use it
1. make sure the PR is against the proper release branch
1. generating the PR starts several CI processes; most run in GitHub Actions, but one runs in Drone, so post the link to the Drone CI run in the PR
1. this keeps everyone on the same page
1. if there is an error in the CI, make sure to note that and what the errors are for reviewers
1. finding error messages:
1. example: https://drone-pr.k3s.io/k3s-io/k3s/4744
1. click "show all logs" to see all of the logs
1. search for " failed."; this will find a line like "Test bEaiAq failed."
1. search for "err=" and look for a log with the id "bEaiAq" in it
1. example error:
```
#- Tail: /tmp/bEaiAq/agents/1/logs/system.log
[LATEST-SERVER] E0921 19:16:55.430977 57 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs
[LATEST-SERVER] I0921 19:16:55.431186 57 proxier.go:667] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
```
1. the first part of the log gives a hint to the log level: "E0921" is an error log, "I0921" is an info log
1. you can also look for "Summarizing \d Failure" (I installed a plugin on my browser to get regex search: "Chrome Regex Search")
1. example error:
```
[Fail] [sig-network] DNS [It] should support configurable pod DNS nameservers [Conformance]
```
1. example PR: https://github.com/k3s-io/k3s/pull/6164
1. many errors are flaky/transient; it is usually a good idea to simply retry the CI on the first failure
1. if the same error occurs multiple times, then it is a good idea to escalate to the team
1. After the CI passes (or the team dismisses the CI as "flaky"), and you have at least 2 approvals, you can merge it
1. make sure you have 2 approvals on the latest changes
1. make sure the CI passes or the team approves merging without it passing
1. make sure to use the "squash and merge" option in GitHub
1. make sure to update the Slack channel with the new Publish/Merge CI
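Putting the commit-and-push steps above together, a minimal sketch (versions and branch names follow the examples in this doc; adjust them to your release):
```
git commit --all --signoff -m "Update to v1.25.1"
# push to your fork ("origin"), never to "upstream"
git push --set-upstream origin v1.25.1-k3s1
# then follow the link git prints to open the PR against the proper release branch (e.g. release-1.25)
```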
- Help! My memory usage is off the charts and everything has slowed to a crawl!
- I found that rebooting after running tag.sh was the only way to solve this; it seems like a memory leak in VSCode on Mac, or maybe some odd interaction between all of the added/removed files, VSCode's file parser, the CrowdStrike virus scanner, and Docker (my top memory users)

@ -0,0 +1,18 @@
# Rebase
1. clear out any cached or old files: `git add -A; git reset --hard HEAD`
1. clear out any cached or older outputs: `rm -rf _output`
1. rebase your local copy to move the old k3s tag from the old k8s tag to the new k8s tag
1. so there are three copies of the code involved in this process:
1. the upstream kubernetes/kubernetes copy on GitHub
1. the k3s-io/kubernetes copy on GitHub
1. and the local copy on your laptop which is a merge of those
1. the local copy has every branch and every tag from the remotes you have added
1. there are custom/proprietary commits in the k3s-io copy that are not in the kubernetes copy
1. there are commits in the kubernetes copy that do not exist in the k3s-io copy
1. we want the new commits added to the kubernetes copy to be in the k3s-io copy
1. we want the custom/proprietary commits from the k3s-io copy on top of the new kubernetes commits
1. before the rebase, our local copy has all of the commits, but the custom/proprietary k3s-io commits sit between the old kubernetes version and the new kubernetes version
1. after the rebase, our local copy will have the k3s-io custom/proprietary commits on top of the latest kubernetes commits
1. `git rebase --onto $NEW_K8S $OLD_K8S $OLD_K3S_VER~1`
1. After the rebase you will be in a detached HEAD state; this is normal
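A concrete example using the versions from the full command list elsewhere in these docs (moving the v1.22.14-k3s1 commits onto v1.22.15):
```
export OLD_K8S="v1.22.14"
export NEW_K8S="v1.22.15"
export OLD_K3S_VER="v1.22.14-k3s1"
git add -A; git reset --hard HEAD
rm -rf _output
git rebase --onto $NEW_K8S $OLD_K8S $OLD_K3S_VER~1
```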

@ -0,0 +1,36 @@
# Create Release Images
## Create System Agent Installer Images
The k3s-io/k3s Release CI should dispatch the rancher/system-agent-installer-k3s repo, generating a tag there and triggering the CI to build images.
The system-agent-installer-k3s repository is used by the Rancher v2prov system.
This often fails! Check the CI, and if it was not triggered, do the following:
After RCs are cut you need to manually release the system-agent-installer-k3s; this, along with the KDM PR, allows QA to fully test RCs.
This should happen directly after the KDM PR is generated, within a few hours of the release candidate being cut.
These images depend on the release artifact and cannot be generated until after the k3s-io/k3s release CI completes.
1. Create a release in the system-agent-installer-k3s repo
1. it should exactly match the release title in the k3s repo
1. the target is "main" for all releases (no branches)
1. no description
1. make sure to check the "pre-release" checkbox
1. Watch the Drone Publish CI, it should be very quick
1. Verify that the new images appear in Docker hub
## Create K3S Upgrade Images
The k3s-io/k3s Release CI should dispatch the k3s-io/k3s-upgrade repo, generating a tag there and triggering the CI to build images.
These images depend on the release artifact and can not be generated until after the k3s-io/k3s release CI completes.
This sometimes fails! Check the CI and if it was not triggered do the following:
1. Create a release in the k3s-upgrade repo
1. it should exactly match the release title in the k3s repo
1. the target is "main" for all releases (no branches)
1. no description
1. make sure to check the "pre-release" checkbox
1. Watch the Drone Publish CI, it should be very quick
1. Verify that the new images appear in Docker hub
Make sure you are in constant communication with QA during this time so that you can cut more RCs if necessary,
update KDM if necessary, radiate information to the rest of the team and help them in any way possible.

@ -0,0 +1,92 @@
# Create Release Notes PR
1. Use the release notes tool to generate the release notes
1. the release notes tool is located in [ecm distro tools](https://github.com/rancher/ecm-distro-tools)
1. you will need a valid GitHub token to use the tool
1. call the tools as follows (example using v1.23.13-rc1+k3s1):
```
# this outputs to stdout
export GHT=$GITHUB_TOKEN
export PREVIOUS_RELEASE='v1.23.12+k3s1'
export LAST_RELEASE='v1.23.13-rc2+k3s1'
docker run --rm -e GITHUB_TOKEN=$GHT rancher/ecm-distro-tools:latest gen_release_notes -r k3s -m $LAST_RELEASE -p $PREVIOUS_RELEASE
```
1. Update the first line to include the semver of the released version
- example: `<!-- v1.25.3+k3s1 -->`
1. Make sure the title has the new k8s version, and the "changes since" line has the old version number
- example title: `This release updates Kubernetes to v1.25.3, and fixes a number of issues.`
- example "changes since": `## Changes since v1.25.2+k3s1`
1. Verify changes
1. go to releases
1. find the previous release
1. calculate the actual date from "XX days ago"
1. search for pull requests which merged after that date
1. go to the GitHub issue search UI
1. search for PRs for the proper branch, merged after the last release
- release branch is release-1.23
- previous release was v1.23.12
- date of the release was "Sept 28th 2022"
- example search `is:pr base:release-1.23 merged:>2022-09-28 sort:created-asc`
1. for each PR, validate the title of the pr and the commit message comments
- each PR title (or 'release note' section of the first comment) should get an entry
- the entry with this item should have a link at the end to the PR
- the commit messages for the PR should follow, until the next PR
1. if you suspect there is a missing/extra commit, compare the tags
- use the github compare tool to compare the older tag to the newer one
- example: `https://github.com/k3s-io/k3s/compare/v1.25.2+k3s1...v1.25.3-rc2+k3s1`
- this will show all of the commit differences between the two, exposing all of the new commits
- on the commit page you should see the merge issue associated with it
- validate that the merge issue is listed in the release notes
- if the commit is not in the comparison, try comparing the previous release tags
- example: `https://github.com/k3s-io/k3s/compare/v1.25.0+k3s1...v1.25.2+k3s1`
- the commit's merge issue should be listed in the release notes
1. if you are adding backports, make sure you are using the backport issues, not the one for master
1. Verify component release versions
- the list of components is completely static; someone should say something in the PR if we need to add to the list
- Kubernetes, Kine, SQLite, Etcd, Containerd, Runc, Flannel, Metrics-server, Traefik, CoreDNS, Helm-controller, Local-path-provisioner
- the version.sh script found in the k3s repo at scripts/version.sh is the source of truth for version information
1. go to [the k3s repo](https://github.com/k3s-io/k3s) and browse the release tag for the notes you are verifying, [example](https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1)
1. start by searching the version.sh file for the component
1. if you do not find anything, search the build script found in ./scripts/build [example](https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/scripts/build)
1. if you still do not find anything, search the go.mod found in the root of the k3s repo [example](https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/go.mod#L93)
1. some things are in the k3s repo's manifests directory, see ./manifests [example](https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/manifests/local-storage.yaml#L66)
- example info for v1.23.13-rc2+k3s1
```
kubernetes: version.sh pulls from k3s repo go.mod see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/scripts/version.sh#L35
kine: go.mod, see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/go.mod#L93
sqlite: go.mod, see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/go.mod#L97
etcd: go.mod, use the /api/v3 mod, see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/go.mod#L25
containerd: version.sh sets an env variable based on go.mod, then the build script builds it
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/scripts/version.sh#L25
and https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/scripts/build#L36
runc: set in the version.sh
this one is weird, it ignores the go.mod, preferring the version.sh instead
the version.sh sets an env variable which is picked up by the download script
the build script runs 'make' on whatever was downloaded
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/scripts/version.sh#L40
and https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/scripts/download#L29
and https://github.com/k3s-io/k3s/blob/master/scripts/build#L138
flannel: version.sh sets an env variable based on go.mod, then the build script builds it
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/go.mod#L83
metrics-server: version is set in the manifest at manifests/metric-server
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2+k3s1/manifests/metrics-server/metrics-server-deployment.yaml#L42
traefik: version is set in the manifest at manifests/traefik.yaml
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/manifests/traefik.yaml#L36
coredns: version is set in the manifest at manifests/coredns.yaml
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/manifests/coredns.yaml#L122
helm-controller: go.mod, see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/go.mod#L92
local-path-provisioner: version is set in the manifest at manifests/local-storage.yaml
see https://github.com/k3s-io/k3s/blob/v1.23.13-rc2%2Bk3s1/manifests/local-storage.yaml#L66
```
## Understanding Release Notes
Here are the major sections in the release notes:
- changes since
- this relates all changes since the previous release
- more specifically every merge issue (PR) generated should have an entry
- developers may add a special "User-Facing Change" section to their PR to give custom notes
- these notes will appear as sub entries on the issue title
- released components
- this relates all kubernetes 'components' in the release
- components are generally non-core kubernetes options that we install using Helm charts

@ -0,0 +1,24 @@
# Setup Go Environment
These steps are expected before using the release scripts and ecm-distro-tools.
Some of these steps set up Go properly on your machine; others cover Docker and Git.
## Git
1. install Git (using any method that makes sense)
1. Configure Git for working with GitHub (add your ssh key, etc)
## Go
1. install Go from binary
1. set up default Go file structure
1. create $HOME/go/src/github.com/<your user>
1. create $HOME/go/src/github.com/k3s-io
1. create $HOME/go/src/github.com/rancher
1. create $HOME/go/src/github.com/rancherlabs
1. create $HOME/go/src/github.com/kubernetes
1. set GOPATH=$HOME/go
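A minimal sketch of the directory setup above (replace the placeholder with your GitHub user name before running):
```
# <your user> is a placeholder for your GitHub account
mkdir -p $HOME/go/src/github.com/<your user>
mkdir -p $HOME/go/src/github.com/{k3s-io,rancher,rancherlabs,kubernetes}
export GOPATH=$HOME/go
```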
## Docker
1. install Docker (or Docker desktop) using whatever method makes sense

@ -0,0 +1,12 @@
# Set Up K3S Repos
1. make sure the $HOME/go/src/github.com/k3s-io directory exists
1. clear out (remove) the k3s repo if it is already there (just makes things smoother with a new clone)
1. clone k3s-io/k3s repo into that directory as "upstream"
1. fork that repo so that you have a private fork of it
1. if you already have a fork, sync it
1. add your fork repo as "origin"
1. fetch all objects from both repos into your local copy
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the URLs it will pull dependencies from)
1. this is why it is important that the repo is in the github.com/k3s-io directory, and that the repo's directory is "k3s", matching the upstream copy's name: `$HOME/go/src/github.com/k3s-io/k3s`
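A sketch of the clone-and-remote setup, assuming your fork lives at `github.com/<your user>/k3s`:
```
install -d $HOME/go/src/github.com/k3s-io
rm -rf $HOME/go/src/github.com/k3s-io/k3s
git clone --origin upstream https://github.com/k3s-io/k3s.git $HOME/go/src/github.com/k3s-io/k3s
cd $HOME/go/src/github.com/k3s-io/k3s
# replace <your user> with the GitHub account that holds your fork
git remote add origin https://github.com/<your user>/k3s.git
git fetch --all --tags
```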

@ -0,0 +1,9 @@
# Set Up Kubernetes Repos
1. make sure the $HOME/go/src/github.com/kubernetes directory exists
1. clear out (remove) the kubernetes repo if it is already there (just makes things smoother with a new clone)
1. clone kubernetes/kubernetes repo into that directory as "upstream"
1. add k3s-io/kubernetes repo as "k3s-io"
1. fetch all objects from both repos into your local copy
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the URLs it will pull dependencies from)
1. this is why it is important that the repo is in the github.com/kubernetes directory, and that the repo's directory is "kubernetes" matching the upstream copy's name `$HOME/go/src/github.com/kubernetes/kubernetes`
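The same setup for kubernetes/kubernetes, taken from the full command list in the cut-release doc:
```
install -d $HOME/go/src/github.com/kubernetes
rm -rf $HOME/go/src/github.com/kubernetes/kubernetes
git clone --origin upstream https://github.com/kubernetes/kubernetes.git $HOME/go/src/github.com/kubernetes/kubernetes
cd $HOME/go/src/github.com/kubernetes/kubernetes
git remote add k3s-io https://github.com/k3s-io/kubernetes.git
git fetch --all --tags
# the second fetch should pull no new tags, which makes pull errors easier to spot
git fetch --all --tags
```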

@ -0,0 +1,50 @@
# Set Up Environment Variables
The scripts and tools involved in the release require specific environment variables,
and the value of these variables is not always obvious.
This guide helps you navigate the creation of those variables.
1. set GLOBAL_GIT_CONFIG_PATH environment variable (to the path of your git config, ex. '$HOME/.gitconfig'), this will be mounted into a docker container
1. set SSH_MOUNT_PATH environment variable (to the path of your SSH_AUTH_SOCK or your ssh key), this will be mounted into a docker container
1. set OLD_K8S to the previous k8s version
1. set NEW_K8S to the newly released k8s version
1. set OLD_K8S_CLIENT to the kubernetes/go-client version which corresponds with the previous k8s version
1. set NEW_K8S_CLIENT to the client version which corresponds with the newly released k8s version
1. set OLD_K3S_VER to the previous k3s version (the one which corresponds to the previous k8s version), replacing the plus symbol with a dash (e.g. for "v1.25.0+k3s1" use "v1.25.0-k3s1")
1. set NEW_K3S_VER to the k3s version which corresponds to the newly released k8s version, replacing the plus symbol with a dash
1. set RELEASE_BRANCH to the k3s release branch which corresponds to the newly released k8s version
1. set GOPATH to the path to the "go" directory (usually $HOME/go)
1. set GOVERSION to the version of go which the newly released k8s version uses
1. you can find this in the kubernetes/kubernetes repo
1. go to the release tag in the proper release branch
1. go to the build/dependencies.yaml
1. search for the "golang: upstream version" stanza and the go version is the "version" in that stanza
1. example: https://github.com/kubernetes/kubernetes/blob/v1.25.1/build/dependencies.yaml#L90-L91
1. set GOIMAGE to the go version followed by the alpine container version
1. example: "golang:1.16.15-alpine3.15"
1. the first part corresponds to the go version; in this example the GOVERSION would be '1.16.15'
1. the second part is usually the same "-alpine3.15"
1. the prefix is the Docker Hub namespace where this image exists (golang)
1. set BUILD_CONTAINER to the contents of a Dockerfile to build the "build container" for generating the tags
1. the FROM line is the GOIMAGE
1. the only other line is a RUN which adds a few utilities: "bash git make tar gzip curl git coreutils rsync alpine-sdk"
1. example: BUILD_CONTAINER="FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"
1. I like to save these exports in a file and source it; that helps in case you need to set them again or want to see what you did
1. example:
```
export SSH_MOUNT_PATH="/var/folders/m7/1d53xcj57d76n1qxv_ykgr040000gp/T//ssh-dmtrX2MOkrzO/agent.45422"
export GLOBAL_GITCONFIG_PATH="/Users/mtrachier/.gitconfig"
export GLOBAL_GIT_CONFIG_PATH="/Users/mtrachier/.gitconfig"
export OLD_K8S="v1.22.13"
export NEW_K8S="v1.22.14"
export OLD_K8S_CLIENT="v0.22.13"
export NEW_K8S_CLIENT="v0.22.14"
export OLD_K3S_VER="v1.22.13-k3s1"
export NEW_K3S_VER="v1.22.14-k3s1"
export RELEASE_BRANCH="release-1.22"
export GOPATH="/Users/mtrachier/go"
export GOVERSION="1.16.15"
export GOIMAGE="golang:1.16.15-alpine3.15"
export BUILD_CONTAINER="FROM golang:1.16.15-alpine3.15\n RUN apk add --no-cache bash git make tar gzip curl git coreutils rsync alpine-sdk"
```

@ -0,0 +1,21 @@
# Generate Kubernetes Tags
1. run the tag.sh script
1. the tag.sh script is in the commits that exist in the k3s-io/kubernetes copy but not the kubernetes/kubernetes copy
1. when we fetched all from both copies to our local copy we got the tag.sh
1. when we rebased our local copy the tag.sh appears in HEAD
1. the tag.sh requires a strict env to run in, which is why we generated the build container
1. we can now run the tag.sh script in the docker container
1. `docker run --rm -u $(id -u) --mount type=tmpfs,destination=${GOPATH}/pkg -v ${GOPATH}/src:/go/src -v ${GOPATH}/.cache:/go/.cache -v ${GLOBAL_GIT_CONFIG_PATH}:/go/.gitconfig -e HOME=/go -e GOCACHE=/go/.cache -w /go/src/github.com/kubernetes/kubernetes ${GOIMAGE}-dev ./tag.sh ${NEW_K3S_VER} 2>&1 | tee tags-${NEW_K3S_VER}.log`
1. the tag.sh script builds a lot of binaries and creates a commit in your name
1. this can take a while, like 45min in my case
1. the tag.sh script creates a lot of tags in the local copy
1. the "push" output from the tag.sh is a list of commands to be run
1. you should review the commits and tags that the tag.sh creates
1. always review automated commits before pushing
1. build and run the push script
1. there is a lot of output, but only about half of it is git push commands; copy only those commands to build a "push" script
1. after pasting the push commands to a file, make the file executable
1. make sure you are able to push to the k3s-io/kubernetes repo, this is where you will be pushing the tags and commits
1. make sure to set the REMOTE env variable to "k3s-io" before running the script
1. the push script pushes up the tags and commits from your local copy to the k3s-io/kubernetes copy
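A hedged sketch of building the push script from the tag.sh log; the grep pattern is an assumption about the log format, so review what it extracts before running anything:
```
# keep only the push commands from the tag.sh output (assumption: they start with "git push")
grep '^git push' tags-${NEW_K3S_VER}.log > push.sh
chmod +x push.sh
# review push.sh, then push the new tags and commits to the k3s-io copy
REMOTE=k3s-io ./push.sh
```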

@ -0,0 +1,75 @@
# Update KDM
After the RCs are cut, you need to generate the KDM PR within a few hours.
## Set up Repo
1. make sure the $HOME/go/src/github.com/rancher directory exists
1. clear out (remove) the kontainer-driver-metadata repo if it is already there (just makes things smoother with a new clone)
1. fork kdm repo
1. clone your fork into that directory as "origin" (you won't need a local copy of upstream)
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the URLs it will pull dependencies from)
1. go generate needs to be able to fully use Go as expected, so it is important to get the file structure correct
1. this is why it is important that the repo is in the github.com/rancher directory, and that the repo's directory is "kontainer-driver-metadata" matching the upstream copy's name
1. $HOME/go/src/github.com/rancher/kontainer-driver-metadata
1. checkout a new branch (something like "k3s-release-september")
## Update The Channels
1. Edit the "channels.yaml" file in the root of the repo
1. copy and paste the previous version's info directly below it
1. if a version was skipped, there should be a comment stating that
1. ask the QA captain what the min and max channel server versions should be
1. generate the change to the channels.yaml and commit it
## Go Generate
1. Generate json data changes
1. as a separate commit, run the command `go generate`
1. this will alter the data/data.json file
1. commit this change by itself with the commit message "go generate" (exactly that message)
1. push the changes to your fork
## Squashing Your Changes
OK, so you have all the commits and you are ready to go, when suddenly someone asks you to squash all the changes to the channels.yaml and the data/data.json together.
The goal is to have 2 commits, one with all the changes to channels.yaml, and one with the changes to data/data.json.
They might also ask you to rebase from the upstream branch...
1. Rebasing from upstream: `git pull --rebase upstream <branch to rebase from>` for example: `git pull --rebase upstream dev-v2.7`
1. this will pull in all of the commits from upstream's 'dev-v2.7' branch into your local copy
1. this will rebase your local copy's history on top of that pull
1. you will need to verify your files and force push your local copy to your origin copy `git push -f origin <branch name>`, for example: `git push -f origin k3s-release-september`
1. you will see all of the commits for the PR re-added as part of this process, take a note of how many commits are in the PR (needed for next step)
1. force push the rebase to your origin before moving to the next step, this will prevent a diverged head state.
1. Reset local copy: `git reset --hard HEAD~<commit number>`, for example if you had 20 commits: `git reset --hard HEAD~20`
1. this resets your local copy to the point in git history just before your first commit
1. before you reset make sure you are at the tip of HEAD (important for next step)
1. look in the history in GitHub and verify that you are at the proper commit so that you don't squash anyone else's commits into your own
1. Pull in the commits after reset and squash them in your local copy: `git merge --squash HEAD@{1}`
1. the `HEAD@{1}` is returning to where HEAD was before reset
1. this does not actually make a commit for you, it only merges the commits into a single staged but uncommitted state
1. remove the data/data.json from the staged for commit files: `git restore --staged data/data.json`
1. this does not actually restore anything, it simply moves the file from staged for commit to unstaged
1. you want to commit the channels.yaml in a separate commit from the data.json
1. commit the channels.yaml changes
1. this single commit will replace any/all of the previous commits
1. I put a message like "updating channels"
1. stage the data/data.json (`git add data/data.json`) and commit it
1. this creates a new commit with just the changes to the data.json, replacing the previous commits
1. make sure the commit message is `go generate`
1. force push the changes to your origin
1. don't force push to upstream!
1. `git push -f origin <branch>` for example: `git push -f origin k3s-release-september`
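Putting the squash workflow together, a sketch using the example branch and the 20-commit example from above (double-check the commit count and the staged files at every step before force pushing):
```
git pull --rebase upstream dev-v2.7
git push -f origin k3s-release-september
# 20 is only an example; use the number of commits in your PR
git reset --hard HEAD~20
git merge --squash HEAD@{1}
# unstage data.json so channels.yaml gets its own commit
git restore --staged data/data.json
git commit -m "updating channels"
git add data/data.json
git commit -m "go generate"
git push -f origin k3s-release-september
```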
## Create Pull Request
1. generate a PR against the default branch of the KDM repo
1. Add QA captain and k3s group to PR
1. Each time a new RC is cut you must update the KDM PR with the new release information
1. it can be helpful to add the secondary/backup release captain to your fork so that they can also update the PR if necessary
If a PR already exists, add the new commits to the PR rather than generating a new one.
In some cases you may need to generate two PRs, ask the QA lead.
For example, currently (28 Sep 2022) we generate a PR against branch dev-v2.6 and branch dev-v2.7.

docs/release/release.md
@ -0,0 +1,53 @@
# K3S Release Process
## Setup
Set up your environment per [setup](expanded/setup_env.md).
## Generate New Tags for K3S-IO/Kubernetes Fork
1. Generate specific environment variables per [setup rc](expanded/setup_rc.md).
1. Set up Kubernetes repos per [setup k8s repos](expanded/setup_k8s_repos.md).
1. Rebase your local copy to move the old k3s tag from the old k8s tag to the new k8s tag, per [rebase](expanded/rebase.md).
1. Build a custom container for generating tags, per [build container](expanded/build_container.md).
1. Run the tag script to generate tags in the build container, per [tagging](expanded/tagging.md).
## Update K3S
We made some new tags on the k3s-io/kubernetes repo; now we need to tell k3s to use them.
1. If no milestones exist in the k3s repo for the releases, generate them, per [milestones](expanded/milestones.md).
1. Set up k3s repos per [setup k3s repos](expanded/setup_k3s_repos.md).
1. Generate a pull request to update k3s, per [generate pull request](expanded/pr.md).
## Cut Release Candidate
1. The first part of cutting a release (either an RC or a GA) is to create the release itself, per [cut release](expanded/cut_release.md).
1. Then we need to update KDM, per [update kdm](expanded/update_kdm.md).
1. We create release images, per [release images](expanded/release_images.md).
1. Then we need to update or generate the release notes, per [release notes](expanded/release_notes.md).
## Create GA Release
After QA approves the release candidates you need to cut the "GA" release.
This will be tested one more time before the release is considered ready for finalization.
Follow the processes for an RC release:
1. [Cut Release](expanded/cut_release.md)
1. [Update KDM](expanded/update_kdm.md)
1. [Create Release Images](expanded/release_images.md)
1. [Update Release Notes](expanded/release_notes.md)
Make sure you are in constant communication with QA during this time so that you can cut more RCs if necessary,
update KDM if necessary, radiate information to the rest of the team and help them in any way possible.
When QA approves the GA release you can move into the finalization phase.
## Finalization
1. Update the channel server, per [channel server](expanded/channel_server.md)
1. Copy the release notes into the release, per [release notes](expanded/release_notes.md)
1. Wait 24 hours, then uncheck the pre-release checkbox on the release.
1. Edit the release, and check the "set as latest release" checkbox on the "latest" release.
- only one release can be latest
- this will most likely be the patch for the highest/newest minor version
- check with QA for which release this should be