# Overview
There are quite a few ways of running Tailscale inside a Kubernetes cluster. This doc covers creating and managing your own Tailscale node deployments in-cluster. If you want a higher level of automation, easier configuration, automated cleanup of stopped Tailscale devices, or a mechanism for exposing the Kubernetes API server to the tailnet, take a look at the Tailscale Kubernetes operator.

⚠️ Note that the manifests generated by the following commands are not intended for production use, and you will need to tweak them based on your environment and use case. For example, the commands that generate a standalone proxy manifest create a standalone Pod, which will not persist across cluster upgrades. ⚠️
## Instructions
### Setup
- (Optional) Create the following secret, which will automate login.
  You will need to get an auth key from the Tailscale Admin Console.
  If you don't provide the key, you can still authenticate using the URL in the logs.

  ```yaml
  apiVersion: v1
  kind: Secret
  metadata:
    name: tailscale-auth
  stringData:
    TS_AUTHKEY: tskey-...
  ```
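  Alternatively, you can create the same Secret from the command line. A minimal sketch, assuming your auth key is in the `TS_AUTHKEY` shell variable:

  ```shell
  # Creates the tailscale-auth Secret with the auth key as its only entry.
  kubectl create secret generic tailscale-auth --from-literal=TS_AUTHKEY="$TS_AUTHKEY"
  ```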
- Tailscale (v1.16+) supports storing state inside a Kubernetes Secret.

  Configure RBAC to allow the Tailscale pod to read/write the `tailscale` Secret.

  ```shell
  export SA_NAME=tailscale
  export TS_KUBE_SECRET=tailscale-auth
  make rbac | kubectl apply -f-
  ```
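  To confirm the RBAC applied, you can check whether the ServiceAccount can read the Secret via impersonation. A quick sketch, assuming everything lives in the `default` namespace:

  ```shell
  # Should print "yes" if the Role and RoleBinding are in place.
  kubectl auth can-i get secret/tailscale-auth \
    --as="system:serviceaccount:default:$SA_NAME"
  ```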
### Sample Sidecar
Running as a sidecar allows you to directly expose a Kubernetes pod over Tailscale. This is particularly useful if you do not wish to expose a service on the public internet. This method allows bidirectional connectivity between the pod and other devices on the tailnet. You can use ACLs to control traffic flow.
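For example, here is a minimal tailnet policy snippet that restricts access to the sidecar. This is only a sketch: it assumes you tag the sidecar node as `tag:k8s` (and have a matching `tagOwners` entry), neither of which is set up by these manifests:

```jsonc
{
  "acls": [
    // Allow any tailnet member to reach tag:k8s devices on port 80 only.
    {"action": "accept", "src": ["autogroup:member"], "dst": ["tag:k8s:80"]},
  ],
}
```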
- Create and log in to the sample nginx pod with a Tailscale sidecar:

  ```shell
  make sidecar | kubectl apply -f-

  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs nginx ts-sidecar
  ```
- Check if you can connect to nginx over Tailscale:

  ```shell
  curl http://nginx
  ```

  Or, if you have MagicDNS disabled:

  ```shell
  curl "http://$(tailscale ip -4 nginx)"
  ```
### Userspace Sidecar
You can also run the sidecar in userspace mode. The obvious benefit is reducing the permissions Tailscale needs to run; the downside is that outbound connections from the pod to the tailnet must go through either the SOCKS proxy or the HTTP proxy.
- Create and log in to the sample nginx pod with a Tailscale sidecar:

  ```shell
  make userspace-sidecar | kubectl apply -f-

  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs nginx ts-sidecar
  ```
- Check if you can connect to nginx over Tailscale:

  ```shell
  curl http://nginx
  ```

  Or, if you have MagicDNS disabled:

  ```shell
  curl "http://$(tailscale ip -4 nginx)"
  ```
### Sample Proxy
Running a Tailscale proxy allows you to provide inbound connectivity to a Kubernetes Service.
- Provide the `ClusterIP` of the service you want to reach by either:

  **Creating a new deployment:**

  ```shell
  kubectl create deployment nginx --image nginx
  kubectl expose deployment nginx --port 80
  export TS_DEST_IP="$(kubectl get svc nginx -o=jsonpath='{.spec.clusterIP}')"
  ```

  **Using an existing service:**

  ```shell
  export TS_DEST_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
  ```
- Deploy the proxy pod:

  ```shell
  make proxy | kubectl apply -f-

  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs proxy
  ```
- Check if you can connect to nginx over Tailscale:

  ```shell
  curl http://proxy
  ```

  Or, if you have MagicDNS disabled:

  ```shell
  curl "http://$(tailscale ip -4 proxy)"
  ```
### Subnet Router
Running a Tailscale subnet router allows you to access the entire Kubernetes cluster network (assuming NetworkPolicies allow it) over Tailscale.
- Identify the Pod/Service CIDRs that cover your Kubernetes cluster. These will vary depending on which CNI and which cloud provider you are using. Add these to the `TS_ROUTES` variable as comma-separated values.

  ```shell
  SERVICE_CIDR=10.20.0.0/16
  POD_CIDR=10.42.0.0/15
  export TS_ROUTES=$SERVICE_CIDR,$POD_CIDR
  ```
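  If you are unsure of the CIDRs, the following may surface them on some distributions (a sketch only; the flag names vary across providers):

  ```shell
  # Look for cluster-cidr / service-cluster-ip-range in the control plane config.
  kubectl cluster-info dump | grep -m 2 -E "cluster-cidr|service-cluster-ip-range"
  ```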
- Deploy the subnet-router pod:

  ```shell
  make subnet-router | kubectl apply -f-

  # If not using an auth key, authenticate by grabbing the Login URL here:
  kubectl logs subnet-router
  ```
- In the Tailscale admin console, ensure that the routes for the subnet-router are enabled.
- Make sure that any client you want to connect from has `--accept-routes` enabled.
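  On a Linux client, for example:

  ```shell
  tailscale up --accept-routes
  ```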
- Check if you can connect to a `ClusterIP` or a `PodIP` over Tailscale:

  ```shell
  # Get the Service IP
  INTERNAL_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
  # or, the Pod IP
  # INTERNAL_IP="$(kubectl get po <POD_NAME> -o=jsonpath='{.status.podIP}')"
  INTERNAL_PORT=8080
  curl http://$INTERNAL_IP:$INTERNAL_PORT
  ```
## Multiple replicas
Note that if you want to use the Pod manifests generated by the commands above in a multi-replica setup (i.e. a multi-replica StatefulSet), you will need to change the mechanism for storing Tailscale state, to ensure that multiple replicas are not attempting to use a single Kubernetes Secret to store their individual states.

To avoid state clashes, you can either store the state in memory or in an emptyDir volume, or change the provided state Secret name so that a unique name gets generated for each replica.
### Option 1: storing in an emptyDir
You can mount an emptyDir volume and configure the mount as the Tailscale state store via the `TS_STATE_DIR` env var. You must also set `TS_KUBE_SECRET` to an empty string. An example:
```yaml
kind: StatefulSet
metadata:
  name: subnetrouter
spec:
  replicas: 2
  ...
  template:
    ...
    spec:
      ...
      volumes:
      - name: tsstate
        emptyDir: {}
      containers:
      - name: tailscale
        env:
        - name: TS_STATE_DIR
          value: /tsstate
        - name: TS_KUBE_SECRET
          value: ""
        volumeMounts:
        - name: tsstate
          mountPath: /tsstate
```
The downside of this approach is that the state will be lost when a Pod is
deleted. In practice this means that when you, for example, upgrade proxy
versions, you will get a new set of Tailscale devices with different hostnames.
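The stale devices will linger in the admin console until removed. One way to find them is the Tailscale API; a sketch, assuming an API access token in the `TS_API_KEY` shell variable:

```shell
# List device hostnames in the default tailnet ("-"); stale subnet routers
# will show up here and can then be removed via the admin console.
curl -s -u "$TS_API_KEY:" \
  "https://api.tailscale.com/api/v2/tailnet/-/devices" | jq -r '.devices[].hostname'
```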
### Option 2: dynamically generating unique Secret names
If you run the proxy as a StatefulSet, the Pods get stable identifiers.
You can use that to pass an individual, static state Secret name to each proxy:
```yaml
kind: StatefulSet
metadata:
  name: subnetrouter
spec:
  replicas: 2
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: tailscale
        env:
        - name: TS_KUBE_SECRET
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
```
In this case, each replica will store its state in a Secret named the same as the Pod. Because Pod names for a StatefulSet do not change when Pods get recreated, proxy state will persist across cluster and proxy version updates.
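Once both replicas are up, you should see one state Secret per Pod (StatefulSet Pod names follow the `<name>-<ordinal>` pattern). Note that the RBAC generated in the setup step is scoped to a single Secret name, so you may need to extend it to cover the per-Pod names. A quick check:

```shell
# With replicas: 2, the Pods (and hence their state Secrets) are
# subnetrouter-0 and subnetrouter-1.
kubectl get secrets subnetrouter-0 subnetrouter-1
```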