I omitted a lot of the min/max modernizers because they didn't
result in clearer code.
Much of the change converts older loops to the newer "for x := range 123" style.
Also: errors.AsType, any, fmt.Appendf, etc.
Updates #18682
Change-Id: I83a451577f33877f962766a5b65ce86f7696471c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Considerable latency was seen when using k8s-proxy with a ProxyGroup
in the Kubernetes operator. Switching to L4 TCPForward solves this.
Fixes tailscale#18171
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Co-authored-by: chaosinthecrd <tom@tmlabs.co.uk>
Remove the TS_EXPERIMENTAL_KUBE_API_EVENTS env var from the operator and its
helm chart. This has already been marked as deprecated, and has been
scheduled to be removed in release 1.96.
Add a check in the helm chart that fails if the removed variable is set to
true, prompting users to move to ACLs instead (a sketch of the guard follows).
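A sketch of what such a guard can look like in a chart template, using
Helm's built-in `fail` function (the values key shown is hypothetical; the
chart's real key may differ):
```yaml
{{- /* Sketch only: the values key here is hypothetical */ -}}
{{- if .Values.experimentalKubeAPIEvents }}
{{- fail "TS_EXPERIMENTAL_KUBE_API_EVENTS has been removed; use ACLs to manage access instead" }}
{{- end }}
```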
Fixes: #18875
Signed-off-by: Becky Pauley <becky@tailscale.com>
This commit implements a reconciler for the new `ProxyGroupPolicy`
custom resource. When created, all `ProxyGroupPolicy` resources
within the same namespace are merged into two `ValidatingAdmissionPolicy`
resources, one for egress and one for ingress.
These policies use CEL expressions to limit the usage of the
"tailscale.com/proxy-group" annotation on `Service` and `Ingress`
resources on create & update.
Also included is a new e2e test that ensures resources violating the
policy return an error on creation, and that they can be created once the
policy is changed to allow them.
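A minimal sketch of one of the generated policies, assuming a merged allow
list of `pg-a` and `pg-b` (the name and exact CEL wording are illustrative;
the real resources also cover `Ingress`):
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: tailscale-egress-proxy-group-policy # hypothetical name
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services"]
  validations:
    - expression: >-
        !has(object.metadata.annotations) ||
        !('tailscale.com/proxy-group' in object.metadata.annotations) ||
        object.metadata.annotations['tailscale.com/proxy-group'] in ['pg-a', 'pg-b']
      message: "the referenced ProxyGroup is not allowed by this namespace's ProxyGroupPolicy"
```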
Closes: https://github.com/tailscale/corp/issues/36830
Signed-off-by: David Bond <davidsbond93@gmail.com>
This commit adds a new custom resource definition to the kubernetes
operator named `ProxyGroupPolicy`. This resource is namespace-scoped
and is used as an allow list for which `ProxyGroup` resources can be
used within its namespace.
The `spec` contains two fields, `ingress` and `egress`. Each should
contain the names of the `ProxyGroup` resources that may be used as
values in the `tailscale.com/proxy-group` annotation within `Service`
and `Ingress` resources.
The intention is for these policies to be merged within a namespace and
produce a `ValidatingAdmissionPolicy` and `ValidatingAdmissionPolicyBinding`
for both ingress and egress that prevent users from referencing the names
of `ProxyGroup` resources not included in the allow list.
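A sketch of such a policy, assuming the two fields are plain lists of
names (`prod-ingress` and `prod-egress` are hypothetical ProxyGroup names):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: ProxyGroupPolicy
metadata:
  name: allowed-proxy-groups
  namespace: team-a
spec:
  ingress:
    - prod-ingress # ProxyGroups that may be referenced for ingress
  egress:
    - prod-egress  # ProxyGroups that may be referenced for egress
```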
Closes: https://github.com/tailscale/corp/issues/36829
Signed-off-by: David Bond <davidsbond93@gmail.com>
This file was never truly necessary and has never actually been used in
the history of Tailscale's open source releases.
A Brief History of AUTHORS files
---
The AUTHORS file was a pattern developed at Google, originally for
Chromium, then adopted by Go and a bunch of other projects. The problem
was that Chromium originally had a copyright line only recognizing
Google as the copyright holder. Because Google (and most open source
projects) do not require copyright assignment for contributions, each
contributor maintains their copyright. Some large corporate contributors
then tried to add their own name to the copyright line in the LICENSE
file or in file headers. This quickly becomes unwieldy, and puts a
tremendous burden on anyone building on top of Chromium, since the
license requires that they keep all copyright lines intact.
The compromise was to create an AUTHORS file that would list all of the
copyright holders. The LICENSE file and source file headers would then
include that list by reference, listing the copyright holder as "The
Chromium Authors".
This also became cumbersome: simply keeping the file up to date with a
high rate of new contributors was a chore. Plus, it's not always obvious
who the copyright holder is. Sometimes it is the individual making the
contribution, but many times it may be their employer. There is no way
for the project maintainer to know.
Eventually, Google changed their policy to no longer recommend trying to
keep the AUTHORS file up to date proactively, and instead to only add to
it when requested: https://opensource.google/docs/releasing/authors.
They are also clear that:
> Adding contributors to the AUTHORS file is entirely within the
> project's discretion and has no implications for copyright ownership.
It was primarily added to appease a small number of large contributors
that insisted that they be recognized as copyright holders (which was
entirely their right to do). But it's not truly necessary, and not even
the most accurate way of identifying contributors and/or copyright
holders.
In practice, we've never added anyone to our AUTHORS file. It only lists
Tailscale, so it's not really serving any purpose. It also causes
confusion because Tailscalars put the "Tailscale Inc & AUTHORS" header
in other open source repos which don't actually have an AUTHORS file, so
it's ambiguous what that means.
Instead, we just acknowledge that the contributors to Tailscale (whoever
they are) are copyright holders for their individual contributions. We
also have the benefit of using the DCO (developercertificate.org) which
provides some additional certification of their right to make the
contribution.
The source file changes were purely mechanical with:
git ls-files | xargs sed -i -e 's/\(Tailscale Inc &\) AUTHORS/\1 contributors/g'
Updates #cleanup
Change-Id: Ia101a4a3005adb9118051b3416f5a64a4a45987d
Signed-off-by: Will Norris <will@tailscale.com>
This commit contains the implementation of multi-tailnet support within the Kubernetes operator.
Each of our custom resources now expose the `spec.tailnet` field. This field is a string that must match the name of an existing `Tailnet` resource. A `Tailnet` resource looks like this:
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Tailnet
metadata:
name: example # This is the name that must be referenced by other resources
spec:
credentials:
secretName: example-oauth
```
Each `Tailnet` references a `Secret` resource that contains a set of oauth credentials. This secret must be created in the same namespace as the operator:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: example-oauth # This is the name that's referenced by the Tailnet resource.
namespace: tailscale
stringData:
client_id: "client-id"
client_secret: "client-secret"
```
When created, the operator performs a basic check that the oauth client has access to all required scopes. This is done using read actions on devices, keys & services. While this doesn't capture a missing "write" permission, it catches completely missing permissions. Once this check passes, the `Tailnet` moves into a ready state and can be referenced. Attempting to use a `Tailnet` in a non-ready state will stall the deployment of `Connector`s, `ProxyGroup`s and `Recorder`s until the `Tailnet` becomes ready.
The `spec.tailnet` field informs the operator that a `Connector`, `ProxyGroup`, or `Recorder` must be given an auth key generated using the specified oauth client. For backwards compatibility, the set of credentials the operator is configured with are considered the default. That is, where `spec.tailnet` is not set, the resource will be deployed in the same tailnet as the operator.
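For example, a minimal sketch of a `Connector` deployed into the `example`
tailnet defined above (`exitNode` is just one illustrative use):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: example-exit-node
spec:
  tailnet: example # must name a Tailnet resource in a ready state
  exitNode: true
```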
Updates https://github.com/tailscale/corp/issues/34561
* k8s-operator,kube: removing enableSessionRecordings option. It seems
likely to create a confusing user experience and to serve a very niche
use case, so we have decided to defer this for now.
Updates tailscale/corp#35796
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
* k8s-operator: adding metric for env var deprecation
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
---------
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
This commit adds the `spec.replicas` field to the `Recorder` custom
resource that allows for a highly available deployment of `tsrecorder`
within a kubernetes cluster.
Many changes were required here, as the code hard-coded the assumption
of a single replica. This required a few loops, similar to what we do
for the `Connector` resource, to create auth and state secrets. It was
also necessary to add a check to remove dangling state and auth secrets
should the recorder be scaled down.
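A minimal sketch of an HA `Recorder` using the new field:
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Recorder
metadata:
  name: example
spec:
  replicas: 3 # each replica gets its own auth and state Secrets
```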
Updates: https://github.com/tailscale/tailscale/issues/17965
Signed-off-by: David Bond <davidsbond93@gmail.com>
This commit modifies the k8s-operator's API proxy implementation to only
enable forwarding of API requests to tsrecorder when an environment
variable is set.
This new environment variable is named `TS_EXPERIMENTAL_KUBE_API_EVENTS`.
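A sketch of opting in on the operator's Deployment (assuming `"true"` is
the enabling value):
```yaml
# Container env for the operator (sketch):
env:
  - name: TS_EXPERIMENTAL_KUBE_API_EVENTS
    value: "true"
```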
Updates https://github.com/tailscale/corp/issues/32448
Signed-off-by: David Bond <davidsbond93@gmail.com>
The hijacker on k8s-proxy's reverse proxy is used to stream recordings
to tsrecorder as they pass through the proxy to the kubernetes api
server. The connection to the recorder was using the client's
(e.g., kubectl) context, rather than a dedicated one. This was causing
the recording stream to get cut off in scenarios where the client
cancelled the context before streaming could be completed.
By using a dedicated context, we can continue streaming even if the
client cancels the context (for example if the client request
completes).
Fixes #17404
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Adds DNS configuration support to ProxyClass, allowing users to customize DNS resolution for Tailscale proxy pods.
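A sketch of what this might look like, assuming the ProxyClass pod
template exposes fields mirroring the Pod spec's `dnsPolicy`/`dnsConfig`
(the exact field path is an assumption):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: custom-dns
spec:
  statefulSet:
    pod:
      dnsPolicy: None      # hypothetical field path
      dnsConfig:           # hypothetical field path
        nameservers:
          - 10.96.0.53
```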
Fixes #16886
Signed-off-by: Raj Singh <raj@tailscale.com>
This commit modifies the `DNSConfig` custom resource to allow specifying
a replica count when deploying a nameserver. This allows deploying
nameservers in an HA configuration.
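A sketch, assuming the replica count sits directly under `nameserver`
(the field name is an assumption):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
  name: ts-dns
spec:
  nameserver:
    replicas: 2 # hypothetical field: nameserver Deployment replica count
```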
Updates https://github.com/tailscale/corp/issues/32589
Signed-off-by: David Bond <davidsbond93@gmail.com>
This change adds full IPv6 support to the Kubernetes operator's DNS functionality,
enabling dual-stack and IPv6-only cluster support.
Fixes #16633
Signed-off-by: Raj Singh <raj@tailscale.com>
This commit adds a `replicas` field to the `Connector` custom resource that
allows users to specify the number of desired replicas deployed for their
connectors.
This allows users to deploy exit nodes, subnet routers and app connectors
in a highly available fashion.
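For example, a highly available subnet router might look like this
(a sketch; the routes are illustrative):
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: ha-subnet-router
spec:
  replicas: 2
  subnetRouter:
    advertiseRoutes:
      - "10.40.0.0/14"
```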
Fixes #14020
Signed-off-by: David Bond <davidsbond93@gmail.com>
The serve code leaves it up to the system's DNS resolver and netstack to
figure out how to reach the proxy destination. Combined with k8s-proxy
running in userspace mode, this means we can't rely on MagicDNS being
available or tailnet IPs being routable. I'd like to implement that as a
feature for serve in userspace mode, but for now the safer fix to get
kube-apiserver ProxyGroups consistently working in all environments is to
switch to using localhost as the proxy target instead.
This has a small knock-on in the code that does WhoIs lookups, which now
needs to check the X-Forwarded-For header that serve populates to get
the correct tailnet IP to look up, because the request's remote address
will be loopback.
Fixes #16920
Change-Id: I869ddcaf93102da50e66071bb00114cc1acc1288
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
This occasionally panics waiting on a nil ctx, but was missed in the
previous PR because it's quite a rare flake as it needs to progress to a
specific point in the parser.
Updates #16678
Change-Id: Ifd36dfc915b153aede36b8ee39eff83750031f95
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
When kubectl starts an interactive attach session, it sends 2 resize
messages in quick succession. It seems that particularly in HTTP mode,
we often receive both of these WebSocket frames from the underlying
connection in a single read. However, our parser currently assumes 0-1
frames per read, and leaves the second frame in the read buffer until
the next read from the underlying connection. It doesn't take long after
that before we end up failing to skip a control message as we normally
should, and then we parse a control message as though it will have a
stream ID (part of the Kubernetes protocol) and error out.
Instead, we should keep parsing frames from the read buffer for as long
as we're able to parse complete frames, so this commit refactors the
messages parsing logic into a loop based on the contents of the read
buffer being non-empty.
See k/k staging/src/k8s.io/kubectl/pkg/cmd/attach/attach.go for full
details of the resize messages.
There are at least a couple more multiple-frame read edge cases we
should handle, but this commit is very conservatively fixing a single
observed issue to make it a low-risk candidate for cherry picking.
Updates #13358
Change-Id: Iafb91ad1cbeed9c5231a1525d4563164fc1f002f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Updates k8s-proxy's config so its auth mode config matches what we set
in kube-apiserver ProxyGroups, for consistency.
Updates #13358
Change-Id: I95e29cec6ded2dc7c6d2d03f968a25c822bc0e01
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
This commit modifies the kubernetes operator's `DNSConfig` resource
with the addition of a new field at `nameserver.service.clusterIP`.
This field allows users to specify a static in-cluster IP address of
the nameserver when deployed.
Fixes #14305
Signed-off-by: David Bond <davidsbond93@gmail.com>
Adds a new reconciler for ProxyGroups of type kube-apiserver that will
provision a Tailscale Service for each replica to advertise. Adds two
new condition types to the ProxyGroup, TailscaleServiceValid and
TailscaleServiceConfigured, to post updates on the state of that
reconciler in a way that's consistent with the service-pg reconciler.
The created Tailscale Service name is configurable via a new ProxyGroup
field, spec.kubeAPIServer.serviceName, which expects a string of the
form "svc:<dns-label>".
Lots of supporting changes were needed to implement this in a way that's
consistent with other operator workflows, including:
* Pulled containerboot's ensureServicesUnadvertised and certManager into
kube/ libraries to be shared with k8s-proxy. Use those in k8s-proxy to
aid Service cert sharing between replicas and graceful Service shutdown.
* For certManager, add an initial wait to the cert loop to wait until
the domain appears in the device's netmap, to avoid a guaranteed error
on the first issue attempt when it's quick to start.
* Made several methods in ingress-for-pg.go and svc-for-pg.go into
functions to share with the new reconciler.
* Added a Resource struct to the owner refs stored in Tailscale Service
annotations to be able to distinguish between Ingress- and ProxyGroup-
based Services that need cleaning up in the Tailscale API.
* Added a ListVIPServices method to the internal tailscale client to aid
cleaning up orphaned Services.
* Support for reading config from a kube Secret, and partial support for
config reloading, to prevent us having to force Pod restarts when
config changes.
* Fixed up the zap logger so it's possible to set debug log level.
Updates #13358
Change-Id: Ia9607441157dd91fb9b6ecbc318eecbef446e116
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
The observed generation was always set to 0 in #16429, but this had the
knock-on effect of other controllers considering ProxyGroups never ready,
because the observed generation is never up to date in
proxyGroupCondition. Make sure the ProxyGroupAvailable function does not
require the observed generation to be up to date, and add test coverage
to catch regressions.
Updates #16327
Change-Id: I42f50ad47dd81cc2d3c3ce2cd7b252160bb58e40
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Adds a new k8s-proxy command to convert the operator's in-process proxy to
a separately deployable type of ProxyGroup: kube-apiserver. k8s-proxy
reads in a new config file written by the operator, modelled on tailscaled's
conffile but with some modifications to ensure multiple versions of the
config can co-exist within a file. This should make it much easier to
support reading that config file from a Kube Secret with a stable file name.
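A sketch of the versioned layout, assuming each config version lives under
its own top-level key so that old and new versions can co-exist in one
file (keys and fields here are hypothetical):
```yaml
# k8s-proxy config file (sketch; field names hypothetical):
v1alpha1:
  authMode: true
  # ...remaining proxy settings...
```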
To avoid needing to give the operator ClusterRole{,Binding} permissions,
the helm chart now optionally deploys a new static ServiceAccount for
the API Server proxy to use if in auth mode.
Proxies deployed by kube-apiserver ProxyGroups currently work the same as
the operator's in-process proxy. They do not yet leverage Tailscale Services
for presenting a single HA DNS name.
Updates #13358
Change-Id: Ib6ead69b2173c5e1929f3c13fb48a9a5362195d8
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Refactors setting status into its own top-level function to make it
easier to ensure we _always_ set the status if it's changed on every
reconcile. Previously, it was possible to have stale status if some
earlier part of the provision logic failed.
Updates #16327
Change-Id: Idab0cfc15ae426cf6914a82f0d37a5cc7845236b
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Previously, the operator checked the ProxyGroup status fields for
information on how many of the proxies had successfully authed. Use
their state Secrets instead as a more reliable source of truth.
containerboot has written device_fqdn and device_ips keys to the
state Secret since inception, and pod_uid since 1.78.0, so there's
no need to use the API for that data. Read it from the state Secret
for consistency. However, to ensure we don't read data from a
previous run of containerboot, make sure we reset containerboot's
state keys on startup.
One other knock-on effect of that is ProxyGroups can briefly be
marked not Ready while a Pod is restarting. Introduce a new
ProxyGroupAvailable condition to more accurately reflect
when downstream controllers can implement flows that rely on a
ProxyGroup having at least 1 proxy Pod running.
Fixes #16327
Change-Id: I026c18e9d23e87109a471a87b8e4fb6271716a66
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
This reconciler allows users to make applications highly available at L3 by
leveraging Tailscale Virtual Services. Many Kubernetes Services
(irrespective of the cluster they reside in) can be mapped to a
Tailscale Virtual Service, allowing access to these Services at L3.
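For example, a cluster Service might opt in to a ProxyGroup-backed
Tailscale Virtual Service roughly like this (a sketch; the annotation is
the operator's existing `tailscale.com/proxy-group`, other details are
illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    tailscale.com/proxy-group: ingress-proxies # ProxyGroup backing the Virtual Service
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale
  selector:
    app: app
  ports:
    - port: 80
```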
Updates #15895
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Adds Recorder fields to configure the name and annotations of the ServiceAccount
created for and used by its associated StatefulSet. This allows the created Pod
to authenticate with AWS without requiring a Secret with static credentials,
using AWS' IAM Roles for Service Accounts feature, documented here:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
Fixes #15875
Change-Id: Ib0e15c0dbc357efa4be260e9ae5077bacdcb264f
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
The defaultEnv and defaultBool functions are copied over temporarily
to minimise the diff. This lays the groundwork for having both the operator
and the new k8s-proxy binary implement the API proxy.
Updates #13358
Change-Id: Ieacc79af64df2f13b27a18135517bb31c80a5a02
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
This change introduces an Age column in the output for all custom
resources to enhance visibility into their lifecycle status.
Fixes #15499
Signed-off-by: satyampsoni <satyampsoni@gmail.com>
This adds netx.DialFunc, unifying a type we have in a bazillion other
places and giving it a nice short name that's clickable in
editors, etc.
That highlighted that my earlier move (03b47a55c7) of stuff from
nettest into netx moved too much: it also dragged along the memnet
impl, meaning all users of netx.DialFunc who just wanted netx for the
type definition were instead also pulling in all of memnet.
So move the memnet implementation netx.Network into memnet, a package
we already had.
Then use netx.DialFunc in a bunch of places. I'm sure I missed some.
And plenty remain in other repos, to be updated later.
Updates tailscale/corp#27636
Change-Id: I7296cd4591218e8624e214f8c70dab05fb884e95
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Temporarily make sure that the HA Ingress reconciler does not run,
as we do not want to release this to stable just yet.
Updates tailscale/corp#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator,k8s-operator: allow using LE staging endpoint for Ingress
Optionally allow using the LetsEncrypt staging endpoint to issue
certs for Ingress/HA Ingress, so that it is easier to
experiment with an initial Ingress setup without hitting rate limits.
Updates tailscale/corp#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This change:
- reinstates the HA Ingress controller that was disabled for 1.80 release
- fixes the API calls to manage VIPServices as the API was changed
- triggers the HA Ingress reconciler on ProxyGroup changes
Updates tailscale/tailscale#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The HA Ingress functionality is not actually doing anything
valuable yet, so don't run the controller in 1.80 release yet.
Updates tailscale/tailscale#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/{containerboot,k8s-operator},kube: add preshutdown hook for egress PG proxies
This change is part of work towards minimizing downtime during update
rollouts of egress ProxyGroup replicas.
This change:
- updates the containerboot health check logic to return Pod IP in headers,
if set
- always runs the health check for egress PG proxies
- updates ClusterIP Services created for PG egress endpoints to include
the health check endpoint
- implements a preshutdown endpoint in proxies. The preshutdown endpoint
logic waits until, for all currently configured egress services, the
shutting-down Pod is no longer returned by the ClusterIP Service health
check endpoint (determined by looking at the new Pod IP header).
- ensures that kubelet is configured to call the preshutdown endpoint (a
sketch of this wiring follows the list)
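A sketch of the kubelet wiring on the proxy container (the path is
hypothetical; 9002 is containerboot's default local health port):
```yaml
# Proxy container lifecycle hook (sketch; path is hypothetical):
lifecycle:
  preStop:
    httpGet:
      path: /internal-egress-services-preshutdown
      port: 9002
```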
This reduces the possibility that, as replicas are terminated during an
update, a replica to which cluster traffic is still being routed via the
ClusterIP Service gets terminated because kube-proxy has not yet updated
its routing rules.
This is not a perfect check: in practice, it only verifies that the
kube-proxy on the node on which the proxy runs has updated its rules.
However, overall this might be good enough.
The preshutdown logic is disabled if users have configured a custom health
check port via the TS_LOCAL_ADDR_PORT env var. This change logs a warning
if so, and in the future setting that env var for operator proxies might be
disallowed (as users shouldn't need to configure this for a Pod directly).
This is backwards compatible with earlier proxy versions.
Updates tailscale/tailscale#14326
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: allow users to set custom labels for the optional ServiceMonitor
Updates tailscale/tailscale#14381
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Currently this does not yet do anything apart from creating
the ProxyGroup's child resources, such as its StatefulSet.
Updates tailscale/corp#24795
Signed-off-by: Irbe Krumina <irbe@tailscale.com>