Sync v2.18.0

This commit is contained in:
Andrey Meshkov
2025-12-07 18:53:54 +03:00
parent c1ba1c877a
commit 0860d38469
75 changed files with 4144 additions and 1125 deletions


@@ -7,6 +7,20 @@ The format is **not** based on [Keep a Changelog][kec], since the project **does
[kec]: https://keepachangelog.com/en/1.0.0/
[sem]: https://semver.org/spec/v2.0.0.html
## AGDNS-3491 / Build 1109
- The new environment variable `CATEGORY_FILTER_ENABLED` has been added.
- The environment variable `CATEGORY_FILTER_INDEX_URL` is no longer required if `CATEGORY_FILTER_ENABLED` is set to `0`.
## AGDNS-3435 / Build 1102
- The new required environment variable `CATEGORY_FILTER_INDEX_URL` has been added.
## AGDNS-3410 / Build 1095
- The environment variable `PROFILES_CACHE_INTERVAL` has been added.
## AGDNS-3287 / Build 1081
- Profiles file cache version has been incremented to support custom block-page data.


@@ -22,7 +22,7 @@ BRANCH = $${BRANCH:-$$(git rev-parse --abbrev-ref HEAD)}
GOAMD64 = v1
GOPROXY = https://proxy.golang.org|direct
GOTELEMETRY = off
GOTOOLCHAIN = go1.25.3
GOTOOLCHAIN = go1.25.5
RACE = 0
REVISION = $${REVISION:-$$(git rev-parse --short HEAD)}
VERSION = 0


@@ -191,11 +191,16 @@ If you're using an OS different from Linux, you also need to make these changes:
- Remove the `interface_listeners` section.
- Remove `bind_interfaces` from the `default_dns` server configuration and replace it with `bind_addresses`.
<!--
TODO(a.garipov,e.burkov): Update the script below.
-->
```sh
env \
ADULT_BLOCKING_URL='https://raw.githubusercontent.com/ameshkov/stuff/master/DNS/adult_blocking.txt' \
BILLSTAT_URL='grpc://localhost:6062' \
BLOCKED_SERVICE_INDEX_URL='https://adguardteam.github.io/HostlistsRegistry/assets/services.json' \
CATEGORY_FILTER_INDEX_URL='https://filters.adtidy.org/dns/category/filters.json' \
CONSUL_ALLOWLIST_URL='https://raw.githubusercontent.com/ameshkov/stuff/master/DNS/consul_allowlist.json' \
CONFIG_PATH='./config.yaml' \
FILTER_INDEX_URL='https://adguardteam.github.io/HostlistsRegistry/assets/filters.json' \


@@ -12,6 +12,8 @@ AdGuard DNS uses [environment variables][wiki-env] to store some of the more sen
- [`BILLSTAT_URL`](#BILLSTAT_URL)
- [`BLOCKED_SERVICE_ENABLED`](#BLOCKED_SERVICE_ENABLED)
- [`BLOCKED_SERVICE_INDEX_URL`](#BLOCKED_SERVICE_INDEX_URL)
- [`CATEGORY_FILTER_ENABLED`](#CATEGORY_FILTER_ENABLED)
- [`CATEGORY_FILTER_INDEX_URL`](#CATEGORY_FILTER_INDEX_URL)
- [`CONFIG_PATH`](#CONFIG_PATH)
- [`CONSUL_ALLOWLIST_URL`](#CONSUL_ALLOWLIST_URL)
- [`CONSUL_DNSCHECK_KV_URL`](#CONSUL_DNSCHECK_KV_URL)
@@ -144,6 +146,20 @@ The HTTP(S) URL of the blocked service index file server. See the [external HTTP
[ext-blocked]: externalhttp.md#filters-blocked-services
## <a href="#CATEGORY_FILTER_ENABLED" id="CATEGORY_FILTER_ENABLED" name="CATEGORY_FILTER_ENABLED">`CATEGORY_FILTER_ENABLED`</a>
When set to `1`, enables the category filter. When set to `0`, disables it.
**Default:** `0`.
## <a href="#CATEGORY_FILTER_INDEX_URL" id="CATEGORY_FILTER_INDEX_URL" name="CATEGORY_FILTER_INDEX_URL">`CATEGORY_FILTER_INDEX_URL`</a>
The HTTP(S) URL or a hostless file URI (e.g. `file:///tmp/category_filters.json`) of the category filtering rule index file server. See the [external HTTP API requirements section][ext-category-lists] on the expected format of the response.
**Default:** No default value, the variable is required if `CATEGORY_FILTER_ENABLED` is set to `1`.
[ext-category-lists]: externalhttp.md#category-filters-lists
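A minimal sketch of wiring these two variables together, following the rules above. The index URL is an illustrative example value, not an official endpoint:

```shell
# Hypothetical sketch: enabling the category filter via environment
# variables. The URL below is an example, not an official endpoint.
export CATEGORY_FILTER_ENABLED='1'
export CATEGORY_FILTER_INDEX_URL='https://filters.example.com/dns/category/filters.json'

# If CATEGORY_FILTER_ENABLED were set to 0 instead, CATEGORY_FILTER_INDEX_URL
# could be omitted entirely.
echo "category filter enabled: ${CATEGORY_FILTER_ENABLED}"
```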
## <a href="#CONFIG_PATH" id="CONFIG_PATH" name="CONFIG_PATH">`CONFIG_PATH`</a>
The path to the configuration file.
@@ -374,6 +390,14 @@ The API key to use when authenticating queries to the profiles API, if any. The
**Default:** **Unset.**
## <a href="#PROFILES_CACHE_INTERVAL" id="PROFILES_CACHE_INTERVAL" name="PROFILES_CACHE_INTERVAL">`PROFILES_CACHE_INTERVAL`</a>
The interval between profiles cache file updates, as a human-readable duration. Setting this variable to a value less than the [refresh interval][conf-backend-refresh_interval] makes no sense, since the variable is only checked at each refresh interval.
**Default:** No default value, the variable is **required** if `PROFILES_CACHE_PATH` is set to a non-`none` value.
[conf-backend-refresh_interval]: configuration.md#backend-refresh_interval
## <a href="#PROFILES_CACHE_PATH" id="PROFILES_CACHE_PATH" name="PROFILES_CACHE_PATH">`PROFILES_CACHE_PATH`</a>
The path to the profile cache file:


@@ -16,6 +16,7 @@ AdGuard DNS uses information from external HTTP APIs for filtering and other pie
- [Consul key-value storage](#consul)
- [Filtering](#filters)
- [Blocked services](#filters-blocked-services)
- [Category filters rule lists](#category-filters-lists)
- [Filtering rule lists](#filters-lists)
- [Safe search](#filters-safe-search)
- [Proxied linked IP and dynamic DNS (DDNS) Endpoints](#backend-linkip)
@@ -99,6 +100,22 @@ This endpoint, defined by [`BLOCKED_SERVICE_INDEX_URL`][env-services], must resp
All properties must be filled with valid IDs and rules. Additional fields in objects are ignored.
### <a href="#category-filters-lists" id="category-filters-lists" name="category-filters-lists">Category filters rule lists</a>
This endpoint, defined by [`CATEGORY_FILTER_INDEX_URL`][env-category-filters], must respond with a `200 OK` response code and a JSON document in the following format:
```json
{
"filters": {
"my_category_name": {
"downloadUrl": "https://cdn.example.com/assets/category/my_category_name.txt"
}
}
}
```
All properties must be filled with valid IDs and URLs. Additional fields in objects are ignored.
### <a href="#filters-lists" id="filters-lists" name="filters-lists">Filtering rule lists</a>
This endpoint, defined by [`FILTER_INDEX_URL`][env-filters], must respond with a `200 OK` response code and a JSON document in the following format:
@@ -128,10 +145,11 @@ These endpoints, defined by [`GENERAL_SAFE_SEARCH_URL`][env-general] and [`YOUTU
|youtubei.googleapis.com^$dnsrewrite=NOERROR;CNAME;restrictmoderate.youtube.com
```
[env-filters]: environment.md#FILTER_INDEX_URL
[env-general]: environment.md#GENERAL_SAFE_SEARCH_URL
[env-services]: environment.md#BLOCKED_SERVICE_INDEX_URL
[env-youtube]: environment.md#YOUTUBE_SAFE_SEARCH_URL
[env-category-filters]: environment.md#CATEGORY_FILTER_INDEX_URL
[env-filters]: environment.md#FILTER_INDEX_URL
[env-general]: environment.md#GENERAL_SAFE_SEARCH_URL
[env-services]: environment.md#BLOCKED_SERVICE_INDEX_URL
[env-youtube]: environment.md#YOUTUBE_SAFE_SEARCH_URL
<!--
TODO(a.garipov): Replace with a link to the new KB when it is finished.


@@ -58,6 +58,8 @@ Property names have been chosen to be single-letter but still have mnemonic rule
- `blocked_service`: the request was blocked by the service blocker. The property `m` contains the ID of that blocked service.
- `category`: the request was filtered by a category filter.
- `custom`: the request was filtered by a custom profile rule.
- `general_safe_search`: the request was modified by the general safe search filter.
@@ -68,7 +70,13 @@ Property names have been chosen to be single-letter but still have mnemonic rule
- `youtube_safe_search`: the request was modified by the YouTube safe search filter.
- <a href="#properties-m" id="properties-m" name="properties-m">`m`</a>: The text of the first rule that matched this query or the ID of the blocked service, if the ID of the filtering rule list is `blocked_service`. If no rules matched, this property is omitted. The short name `m` stands for “match”.
- <a href="#properties-m" id="properties-m" name="properties-m">`m`</a>: The text of the first rule that matched this query. If no rules matched, this property is omitted. The short name `m` stands for “match”.
The special cases are:
- It contains the ID of the blocked service, if the request was blocked by the service blocker.
- It contains the ID of the category, if the request was filtered by a category filter.
**Object examples:**

go.mod

@@ -1,36 +1,36 @@
module github.com/AdguardTeam/AdGuardDNS
go 1.25.3
go 1.25.5
require (
// NOTE: Do not change the pseudoversion.
github.com/AdguardTeam/AdGuardDNS/internal/dnsserver v0.0.0-00010101000000-000000000000
github.com/AdguardTeam/golibs v0.35.2
github.com/AdguardTeam/golibs v0.35.3
github.com/AdguardTeam/urlfilter v0.22.1
github.com/ameshkov/dnscrypt/v2 v2.4.0
github.com/axiomhq/hyperloglog v0.2.5
github.com/bluele/gcache v0.0.2
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
github.com/caarlos0/env/v7 v7.1.0
github.com/getsentry/sentry-go v0.36.0
github.com/getsentry/sentry-go v0.36.2
github.com/gomodule/redigo v1.9.3
github.com/google/go-cmp v0.7.0
github.com/google/renameio/v2 v2.0.0
github.com/google/renameio/v2 v2.0.1
github.com/miekg/dns v1.1.68
github.com/oschwald/maxminddb-golang v1.13.1
github.com/patrickmn/go-cache v2.1.1-0.20191004192108-46f407853014+incompatible
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/client_model v0.6.2
github.com/prometheus/common v0.67.1
github.com/quic-go/quic-go v0.55.0
github.com/prometheus/common v0.67.2
github.com/quic-go/quic-go v0.56.0
github.com/stretchr/testify v1.11.1
github.com/viktordanov/golang-lru v0.5.6
go.yaml.in/yaml/v4 v4.0.0-rc.2
golang.org/x/crypto v0.43.0
golang.org/x/net v0.46.0
golang.org/x/sys v0.37.0
golang.org/x/time v0.13.0
google.golang.org/grpc v1.76.0
go.yaml.in/yaml/v4 v4.0.0-rc.3
golang.org/x/crypto v0.45.0
golang.org/x/net v0.47.0
golang.org/x/sys v0.38.0
golang.org/x/time v0.14.0
google.golang.org/grpc v1.77.0
google.golang.org/protobuf v1.36.10
)
@@ -40,7 +40,7 @@ require (
cloud.google.com/go/compute/metadata v0.9.0 // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/ameshkov/dnsstamps v1.0.3 // indirect
github.com/anthropics/anthropic-sdk-go v1.14.0 // indirect
github.com/anthropics/anthropic-sdk-go v1.19.0 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/caarlos0/env/v11 v11.3.1 // indirect
@@ -55,7 +55,7 @@ require (
github.com/golangci/misspell v0.7.0 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/gookit/color v1.6.0 // indirect
github.com/gordonklaus/ineffassign v0.2.0 // indirect
@@ -66,7 +66,7 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/panjf2000/ants/v2 v2.11.3 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/procfs v0.17.0 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
@@ -83,24 +83,24 @@ require (
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b // indirect
golang.org/x/exp/typeparams v0.0.0-20251017212417-90e834f514db // indirect
golang.org/x/mod v0.29.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/telemetry v0.0.0-20251014153721-24f779f6aaef // indirect
golang.org/x/term v0.36.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/tools v0.38.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/exp/typeparams v0.0.0-20251125195548-87e1e737ad39 // indirect
golang.org/x/mod v0.30.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/telemetry v0.0.0-20251128220624-abf20d0e57ec // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/tools v0.39.0 // indirect
golang.org/x/vuln v1.1.4 // indirect
google.golang.org/genai v1.31.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251020155222-88f65dc88635 // indirect
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 // indirect
google.golang.org/genai v1.36.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.6.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
honnef.co/go/tools v0.6.1 // indirect
mvdan.cc/editorconfig v0.3.0 // indirect
mvdan.cc/gofumpt v0.9.1 // indirect
mvdan.cc/gofumpt v0.9.2 // indirect
mvdan.cc/sh/v3 v3.12.0 // indirect
mvdan.cc/unparam v0.0.0-20250301125049-0df0534333a4 // indirect
mvdan.cc/unparam v0.0.0-20251027182757-5beb8c8f8f15 // indirect
)
// NOTE: Keep in sync with .gitignore.

go.sum

@@ -4,8 +4,8 @@ cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
github.com/AdguardTeam/golibs v0.35.2 h1:GVlx/CiCz5ZXQmyvFrE3JyeGsgubE8f4rJvRshYJVVs=
github.com/AdguardTeam/golibs v0.35.2/go.mod h1:p/l6tG7QCv+Hi5yVpv1oZInoatRGOWoyD1m+Ume+ZNY=
github.com/AdguardTeam/golibs v0.35.3 h1:DI0ffHyL3tFZ2UBEji3Aah7IvFwM5nY5yZoGvs1bnPY=
github.com/AdguardTeam/golibs v0.35.3/go.mod h1:9Y0yqUpwDNyHxCv4AaI42x5+qxYc7k5DWAfxtFOTn8o=
github.com/AdguardTeam/urlfilter v0.22.1 h1:nC2x0MSNwmTsXMTPfs1Gv6GZXKmK7prlzgjCdnE4fR8=
github.com/AdguardTeam/urlfilter v0.22.1/go.mod h1:+wUx7GApNWvFPALjNd5fTLix4PFvQF5Gprx6JDYwxfE=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
@@ -16,8 +16,8 @@ github.com/ameshkov/dnscrypt/v2 v2.4.0 h1:if6ZG2cuQmcP2TwSY+D0+8+xbPfoatufGlOQTM
github.com/ameshkov/dnscrypt/v2 v2.4.0/go.mod h1:WpEFV2uhebXb8Jhes/5/fSdpmhGV8TL22RDaeWwV6hI=
github.com/ameshkov/dnsstamps v1.0.3 h1:Srzik+J9mivH1alRACTbys2xOxs0lRH9qnTA7Y1OYVo=
github.com/ameshkov/dnsstamps v1.0.3/go.mod h1:Ii3eUu73dx4Vw5O4wjzmT5+lkCwovjzaEZZ4gKyIH5A=
github.com/anthropics/anthropic-sdk-go v1.14.0 h1:EzNQvnZlaDHe2UPkoUySDz3ixRgNbwKdH8KtFpv7pi4=
github.com/anthropics/anthropic-sdk-go v1.14.0/go.mod h1:WTz31rIUHUHqai2UslPpw5CwXrQP3geYBioRV4WOLvE=
github.com/anthropics/anthropic-sdk-go v1.19.0 h1:mO6E+ffSzLRvR/YUH9KJC0uGw0uV8GjISIuzem//3KE=
github.com/anthropics/anthropic-sdk-go v1.19.0/go.mod h1:WTz31rIUHUHqai2UslPpw5CwXrQP3geYBioRV4WOLvE=
github.com/axiomhq/hyperloglog v0.2.5 h1:Hefy3i8nAs8zAI/tDp+wE7N+Ltr8JnwiW3875pvl0N8=
github.com/axiomhq/hyperloglog v0.2.5/go.mod h1:DLUK9yIzpU5B6YFLjxTIcbHu1g4Y1WQb1m5RH3radaM=
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
@@ -44,8 +44,8 @@ github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo=
github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA=
github.com/getsentry/sentry-go v0.36.0 h1:UkCk0zV28PiGf+2YIONSSYiYhxwlERE5Li3JPpZqEns=
github.com/getsentry/sentry-go v0.36.0/go.mod h1:p5Im24mJBeruET8Q4bbcMfCQ+F+Iadc4L48tB1apo2c=
github.com/getsentry/sentry-go v0.36.2 h1:uhuxRPTrUy0dnSzTd0LrYXlBYygLkKY0hhlG5LXarzM=
github.com/getsentry/sentry-go v0.36.2/go.mod h1:p5Im24mJBeruET8Q4bbcMfCQ+F+Iadc4L48tB1apo2c=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -72,14 +72,14 @@ github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6 h1:EEHtgt9IwisQ2AZ4pI
github.com/google/pprof v0.0.0-20250820193118-f64d9cf942d6/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/google/renameio v0.1.0 h1:GOZbcHa3HfsPKPlmyPyN2KEohoMXOhdMbHrvbpl2QaA=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/renameio/v2 v2.0.0 h1:UifI23ZTGY8Tt29JbYFiuyIU3eX+RNFtUwefq9qAhxg=
github.com/google/renameio/v2 v2.0.0/go.mod h1:BtmJXm5YlszgC+TD4HOEEUFgkJP3nLxehU6hfe7jRt4=
github.com/google/renameio/v2 v2.0.1 h1:HyOM6qd9gF9sf15AvhbptGHUnaLTpEI9akAFFU3VyW0=
github.com/google/renameio/v2 v2.0.1/go.mod h1:BtmJXm5YlszgC+TD4HOEEUFgkJP3nLxehU6hfe7jRt4=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/gookit/assert v0.1.1 h1:lh3GcawXe/p+cU7ESTZ5Ui3Sm/x8JWpIis4/1aF0mY0=
@@ -128,14 +128,14 @@ github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.1 h1:OTSON1P4DNxzTg4hmKCc37o4ZAZDv0cfXLkOt0oEowI=
github.com/prometheus/common v0.67.1/go.mod h1:RpmT9v35q2Y+lsieQsdOh5sXZ6ajUGC8NjZAmr8vb0Q=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/prometheus/common v0.67.2 h1:PcBAckGFTIHt2+L3I33uNRTlKTplNzFctXcWhPyAEN8=
github.com/prometheus/common v0.67.2/go.mod h1:63W3KZb1JOKgcjlIr64WW/LvFGAqKPj0atm+knVGEko=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
github.com/quic-go/qpack v0.5.1/go.mod h1:+PC4XFrEskIVkcLzpEkbLqq1uCoxPhQuvK5rH1ZgaEg=
github.com/quic-go/quic-go v0.55.0 h1:zccPQIqYCXDt5NmcEabyYvOnomjs8Tlwl7tISjJh9Mk=
github.com/quic-go/quic-go v0.55.0/go.mod h1:DR51ilwU1uE164KuWXhinFcKWGlEjzys2l8zUl5Ss1U=
github.com/quic-go/quic-go v0.56.0 h1:q/TW+OLismmXAehgFLczhCDTYB3bFmua4D9lsNBWxvY=
github.com/quic-go/quic-go v0.56.0/go.mod h1:9gx5KsFQtw2oZ6GZTyh+7YEvOxWCL9WZAepnHxgAo6c=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -187,32 +187,32 @@ go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
go.yaml.in/yaml/v4 v4.0.0-rc.2 h1:/FrI8D64VSr4HtGIlUtlFMGsm7H7pWTbj6vOLVZcA6s=
go.yaml.in/yaml/v4 v4.0.0-rc.2/go.mod h1:aZqd9kCMsGL7AuUv/m/PvWLdg5sjJsZ4oHDEnfPPfY0=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b h1:18qgiDvlvH7kk8Ioa8Ov+K6xCi0GMvmGfGW0sgd/SYA=
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/exp/typeparams v0.0.0-20251017212417-90e834f514db h1:zIIKf9uYLvsQHFOJ0O+SZ9iFRMNkoXzBRxOGDgr4xkA=
golang.org/x/exp/typeparams v0.0.0-20251017212417-90e834f514db/go.mod h1:4Mzdyp/6jzw9auFDJ3OMF5qksa7UvPnzKqTVGcb04ms=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20251014153721-24f779f6aaef h1:5xFtU4tmJMJSxSeDlr1dgBff2tDXrq0laLdS1EA3LYw=
golang.org/x/telemetry v0.0.0-20251014153721-24f779f6aaef/go.mod h1:Pi4ztBfryZoJEkyFTI5/Ocsu2jXyDr6iSdgJiYE/uwE=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
go.yaml.in/yaml/v4 v4.0.0-rc.3 h1:3h1fjsh1CTAPjW7q/EMe+C8shx5d8ctzZTrLcs/j8Go=
go.yaml.in/yaml/v4 v4.0.0-rc.3/go.mod h1:aZqd9kCMsGL7AuUv/m/PvWLdg5sjJsZ4oHDEnfPPfY0=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/exp/typeparams v0.0.0-20251125195548-87e1e737ad39 h1:yzGKB4T4r1nFi65o7dQ96ERTfU2trk8Ige9aqqADqf4=
golang.org/x/exp/typeparams v0.0.0-20251125195548-87e1e737ad39/go.mod h1:4Mzdyp/6jzw9auFDJ3OMF5qksa7UvPnzKqTVGcb04ms=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20251128220624-abf20d0e57ec h1:dRVkWZl6bUOp+oxnOe4BuyhWSIPmt29N4ooHarm7Ic8=
golang.org/x/telemetry v0.0.0-20251128220624-abf20d0e57ec/go.mod h1:hKdjCMrbv9skySur+Nek8Hd0uJ0GuxJIoIX2payrIdQ=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/tools/go/expect v0.1.1-deprecated h1:jpBZDwmgPhXsKZC6WhL20P4b/wmnpsEAGHaNy0n/rJM=
golang.org/x/tools/go/expect v0.1.1-deprecated/go.mod h1:eihoPOH+FgIqa3FpoTwguz/bVUSGBlGQU67vpBeOrBY=
golang.org/x/tools/go/packages/packagestest v0.1.1-deprecated h1:1h2MnaIAIXISqTFKdENegdpAgUXz6NrPEsbIeWaBRvM=
@@ -221,14 +221,14 @@ golang.org/x/vuln v1.1.4 h1:Ju8QsuyhX3Hk8ma3CesTbO8vfJD9EvUBgHvkxHBzj0I=
golang.org/x/vuln v1.1.4/go.mod h1:F+45wmU18ym/ca5PLTPLsSzr2KppzswxPP603ldA67s=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genai v1.31.0 h1:R7xDt/Dosz11vcXbZ4IgisGnzUGGau2PZOIOAnXsYjw=
google.golang.org/genai v1.31.0/go.mod h1:7pAilaICJlQBonjKKJNhftDFv3SREhZcTe9F6nRcjbg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251020155222-88f65dc88635 h1:3uycTxukehWrxH4HtPRtn1PDABTU331ViDjyqrUbaog=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251020155222-88f65dc88635/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.76.0 h1:UnVkv1+uMLYXoIz6o7chp59WfQUYA2ex/BXQ9rHZu7A=
google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 h1:F29+wU6Ee6qgu9TddPgooOdaqsxTMunOoj8KA5yuS5A=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1/go.mod h1:5KF+wpkbTSbGcR9zteSqZV6fqFOWBl4Yde8En8MryZA=
google.golang.org/genai v1.36.0 h1:sJCIjqTAmwrtAIaemtTiKkg2TO1RxnYEusTmEQ3nGxM=
google.golang.org/genai v1.36.0/go.mod h1:A3kkl0nyBjyFlNjgxIwKq70julKbIxpSxqKO5gw/gmk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.6.0 h1:6Al3kEFFP9VJhRz3DID6quisgPnTeZVr4lep9kkxdPA=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.6.0/go.mod h1:QLvsjh0OIR0TYBeiu2bkWGTJBUNQ64st52iWj/yA93I=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -240,9 +240,9 @@ honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI=
honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4=
mvdan.cc/editorconfig v0.3.0 h1:D1D2wLYEYGpawWT5SpM5pRivgEgXjtEXwC9MWhEY0gQ=
mvdan.cc/editorconfig v0.3.0/go.mod h1:NcJHuDtNOTEJ6251indKiWuzK6+VcrMuLzGMLKBFupQ=
mvdan.cc/gofumpt v0.9.1 h1:p5YT2NfFWsYyTieYgwcQ8aKV3xRvFH4uuN/zB2gBbMQ=
mvdan.cc/gofumpt v0.9.1/go.mod h1:3xYtNemnKiXaTh6R4VtlqDATFwBbdXI8lJvH/4qk7mw=
mvdan.cc/gofumpt v0.9.2 h1:zsEMWL8SVKGHNztrx6uZrXdp7AX8r421Vvp23sz7ik4=
mvdan.cc/gofumpt v0.9.2/go.mod h1:iB7Hn+ai8lPvofHd9ZFGVg2GOr8sBUw1QUWjNbmIL/s=
mvdan.cc/sh/v3 v3.12.0 h1:ejKUR7ONP5bb+UGHGEG/k9V5+pRVIyD+LsZz7o8KHrI=
mvdan.cc/sh/v3 v3.12.0/go.mod h1:Se6Cj17eYSn+sNooLZiEUnNNmNxg0imoYlTu4CyaGyg=
mvdan.cc/unparam v0.0.0-20250301125049-0df0534333a4 h1:WjUu4yQoT5BHT1w8Zu56SP8367OuBV5jvo+4Ulppyf8=
mvdan.cc/unparam v0.0.0-20250301125049-0df0534333a4/go.mod h1:rthT7OuvRbaGcd5ginj6dA2oLE7YNlta9qhBNNdCaLE=
mvdan.cc/unparam v0.0.0-20251027182757-5beb8c8f8f15 h1:ssMzja7PDPJV8FStj7hq9IKiuiKhgz9ErWw+m68e7DI=
mvdan.cc/unparam v0.0.0-20251027182757-5beb8c8f8f15/go.mod h1:4M5MMXl2kW6fivUT6yRGpLLPNfuGtU2Z0cPvFquGDYU=


@@ -1,4 +1,4 @@
go 1.25.3
go 1.25.5
use (
.


File diff suppressed because it is too large.


@@ -25,3 +25,26 @@ func ByteSlicesToIPs(data [][]byte) (ips []netip.Addr, err error) {
return ips, nil
}
// IPsToByteSlices is a wrapper around [netip.Addr.MarshalBinary] that ignores the
// always-nil errors.
func IPsToByteSlices(ips []netip.Addr) (data [][]byte) {
if ips == nil {
return nil
}
data = make([][]byte, 0, len(ips))
for _, ip := range ips {
data = append(data, IPToBytes(ip))
}
return data
}
// IPToBytes is a wrapper around [netip.Addr.MarshalBinary] that ignores the
// always-nil error.
func IPToBytes(ip netip.Addr) (b []byte) {
b, _ = ip.MarshalBinary()
return b
}


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.9
// protoc v6.32.0
// protoc-gen-go v1.36.10
// protoc v6.33.1
// source: dns.proto
package backendpb
@@ -307,14 +307,21 @@ func (x *DNSProfilesRequest) GetSyncTime() *timestamppb.Timestamp {
//
// TODO(a.garipov): Expand the field documentation.
type DNSProfile struct {
state protoimpl.MessageState `protogen:"open.v1"`
SafeBrowsing *SafeBrowsingSettings `protobuf:"bytes,5,opt,name=safe_browsing,json=safeBrowsing,proto3" json:"safe_browsing,omitempty"`
Parental *ParentalSettings `protobuf:"bytes,6,opt,name=parental,proto3" json:"parental,omitempty"`
RuleLists *RuleListsSettings `protobuf:"bytes,7,opt,name=rule_lists,json=ruleLists,proto3" json:"rule_lists,omitempty"`
FilteredResponseTtl *durationpb.Duration `protobuf:"bytes,10,opt,name=filtered_response_ttl,json=filteredResponseTtl,proto3" json:"filtered_response_ttl,omitempty"`
Access *AccessSettings `protobuf:"bytes,18,opt,name=access,proto3" json:"access,omitempty"`
RateLimit *RateLimitSettings `protobuf:"bytes,20,opt,name=rate_limit,json=rateLimit,proto3" json:"rate_limit,omitempty"`
CustomDomain *CustomDomainSettings `protobuf:"bytes,22,opt,name=custom_domain,json=customDomain,proto3" json:"custom_domain,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
SafeBrowsing *SafeBrowsingSettings `protobuf:"bytes,5,opt,name=safe_browsing,json=safeBrowsing,proto3" json:"safe_browsing,omitempty"`
Parental *ParentalSettings `protobuf:"bytes,6,opt,name=parental,proto3" json:"parental,omitempty"`
// *
// Field rule_lists contains settings for the rule-list filters. If absent,
// the rule-list filtering is disabled.
RuleLists *RuleListsSettings `protobuf:"bytes,7,opt,name=rule_lists,json=ruleLists,proto3" json:"rule_lists,omitempty"`
FilteredResponseTtl *durationpb.Duration `protobuf:"bytes,10,opt,name=filtered_response_ttl,json=filteredResponseTtl,proto3" json:"filtered_response_ttl,omitempty"`
Access *AccessSettings `protobuf:"bytes,18,opt,name=access,proto3" json:"access,omitempty"`
RateLimit *RateLimitSettings `protobuf:"bytes,20,opt,name=rate_limit,json=rateLimit,proto3" json:"rate_limit,omitempty"`
CustomDomain *CustomDomainSettings `protobuf:"bytes,22,opt,name=custom_domain,json=customDomain,proto3" json:"custom_domain,omitempty"`
// *
// Field category_filter contains settings for the category filtering. If
// absent, the category filtering is disabled.
CategoryFilter *CategoryFilterSettings `protobuf:"bytes,34,opt,name=category_filter,json=categoryFilter,proto3" json:"category_filter,omitempty"`
// *
// Field blocking_mode defines the blocking mode for general rule-list based
// filtering. If field deleted is false, field blocking_mode MUST be
@@ -469,6 +476,13 @@ func (x *DNSProfile) GetCustomDomain() *CustomDomainSettings {
return nil
}
func (x *DNSProfile) GetCategoryFilter() *CategoryFilterSettings {
if x != nil {
return x.CategoryFilter
}
return nil
}
func (x *DNSProfile) GetBlockingMode() isDNSProfile_BlockingMode {
if x != nil {
return x.BlockingMode
@@ -1441,10 +1455,21 @@ func (x *DayRange) GetEnd() *durationpb.Duration {
return nil
}
// *
// Message RuleListsSettings contains settings for the rule-list filters.
//
// The fields are ordered in a way that optimizes the generated structures'
// layouts.
type RuleListsSettings struct {
state protoimpl.MessageState `protogen:"open.v1"`
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"`
Ids []string `protobuf:"bytes,2,rep,name=ids,proto3" json:"ids,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
// *
// Field ids contains unique identifiers of the rule-list filters being used.
// IDs MUST be unique, between 1 and 128 bytes long, and only contain ASCII
// symbols with no slashes.
Ids []string `protobuf:"bytes,2,rep,name=ids,proto3" json:"ids,omitempty"`
// *
// Field enabled, if true, enables the rule-list filter feature.
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -1479,13 +1504,6 @@ func (*RuleListsSettings) Descriptor() ([]byte, []int) {
return file_dns_proto_rawDescGZIP(), []int{15}
}
func (x *RuleListsSettings) GetEnabled() bool {
if x != nil {
return x.Enabled
}
return false
}
func (x *RuleListsSettings) GetIds() []string {
if x != nil {
return x.Ids
@@ -1493,6 +1511,13 @@ func (x *RuleListsSettings) GetIds() []string {
return nil
}
func (x *RuleListsSettings) GetEnabled() bool {
if x != nil {
return x.Enabled
}
return false
}
// *
// Message BlockingModeCustomIP contains custom IP addresses typically leading
// to a blocking page.
@@ -2801,6 +2826,69 @@ func (x *SessionTicket) GetData() []byte {
return nil
}
// *
// Message CategoryFilterSettings contains settings for the category filtering.
//
// The fields are ordered in a way that optimizes the generated structures'
// layouts.
type CategoryFilterSettings struct {
state protoimpl.MessageState `protogen:"open.v1"`
// *
// Field ids contains unique identifiers of the categories being blocked.
// IDs MUST be unique, between 1 and 128 bytes long, and only contain ASCII
// symbols with no slashes.
Ids []string `protobuf:"bytes,2,rep,name=ids,proto3" json:"ids,omitempty"`
// *
// Field enabled, if true, enables the category filter feature.
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CategoryFilterSettings) Reset() {
*x = CategoryFilterSettings{}
mi := &file_dns_proto_msgTypes[41]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CategoryFilterSettings) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CategoryFilterSettings) ProtoMessage() {}
func (x *CategoryFilterSettings) ProtoReflect() protoreflect.Message {
mi := &file_dns_proto_msgTypes[41]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CategoryFilterSettings.ProtoReflect.Descriptor instead.
func (*CategoryFilterSettings) Descriptor() ([]byte, []int) {
return file_dns_proto_rawDescGZIP(), []int{41}
}
func (x *CategoryFilterSettings) GetIds() []string {
if x != nil {
return x.Ids
}
return nil
}
func (x *CategoryFilterSettings) GetEnabled() bool {
if x != nil {
return x.Enabled
}
return false
}
type DeviceSettingsChange_Deleted struct {
state protoimpl.MessageState `protogen:"open.v1"`
DeviceId string `protobuf:"bytes,1,opt,name=device_id,json=deviceId,proto3" json:"device_id,omitempty"`
@@ -2810,7 +2898,7 @@ type DeviceSettingsChange_Deleted struct {
func (x *DeviceSettingsChange_Deleted) Reset() {
*x = DeviceSettingsChange_Deleted{}
mi := &file_dns_proto_msgTypes[41]
mi := &file_dns_proto_msgTypes[42]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2822,7 +2910,7 @@ func (x *DeviceSettingsChange_Deleted) String() string {
func (*DeviceSettingsChange_Deleted) ProtoMessage() {}
func (x *DeviceSettingsChange_Deleted) ProtoReflect() protoreflect.Message {
mi := &file_dns_proto_msgTypes[41]
mi := &file_dns_proto_msgTypes[42]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2854,7 +2942,7 @@ type DeviceSettingsChange_Upserted struct {
func (x *DeviceSettingsChange_Upserted) Reset() {
*x = DeviceSettingsChange_Upserted{}
mi := &file_dns_proto_msgTypes[42]
mi := &file_dns_proto_msgTypes[43]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2866,7 +2954,7 @@ func (x *DeviceSettingsChange_Upserted) String() string {
func (*DeviceSettingsChange_Upserted) ProtoMessage() {}
func (x *DeviceSettingsChange_Upserted) ProtoReflect() protoreflect.Message {
mi := &file_dns_proto_msgTypes[42]
mi := &file_dns_proto_msgTypes[43]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2899,7 +2987,7 @@ type CustomDomain_Pending struct {
func (x *CustomDomain_Pending) Reset() {
*x = CustomDomain_Pending{}
mi := &file_dns_proto_msgTypes[43]
mi := &file_dns_proto_msgTypes[44]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2911,7 +2999,7 @@ func (x *CustomDomain_Pending) String() string {
func (*CustomDomain_Pending) ProtoMessage() {}
func (x *CustomDomain_Pending) ProtoReflect() protoreflect.Message {
mi := &file_dns_proto_msgTypes[43]
mi := &file_dns_proto_msgTypes[44]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2953,7 +3041,7 @@ type CustomDomain_Current struct {
func (x *CustomDomain_Current) Reset() {
*x = CustomDomain_Current{}
mi := &file_dns_proto_msgTypes[44]
mi := &file_dns_proto_msgTypes[45]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2965,7 +3053,7 @@ func (x *CustomDomain_Current) String() string {
func (*CustomDomain_Current) ProtoMessage() {}
func (x *CustomDomain_Current) ProtoReflect() protoreflect.Message {
mi := &file_dns_proto_msgTypes[44]
mi := &file_dns_proto_msgTypes[45]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3022,7 +3110,7 @@ const file_dns_proto_rawDesc = "" +
"\x1cGlobalAccessSettingsResponse\x12+\n" +
"\bstandard\x18\x01 \x01(\v2\x0f.AccessSettingsR\bstandard\"M\n" +
"\x12DNSProfilesRequest\x127\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\"\xf7\x10\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\"\xb9\x11\n" +
"\n" +
"DNSProfile\x12:\n" +
"\rsafe_browsing\x18\x05 \x01(\v2\x15.SafeBrowsingSettingsR\fsafeBrowsing\x12-\n" +
@@ -3034,7 +3122,8 @@ const file_dns_proto_rawDesc = "" +
"\x06access\x18\x12 \x01(\v2\x0f.AccessSettingsR\x06access\x121\n" +
"\n" +
"rate_limit\x18\x14 \x01(\v2\x12.RateLimitSettingsR\trateLimit\x12:\n" +
"\rcustom_domain\x18\x16 \x01(\v2\x15.CustomDomainSettingsR\fcustomDomain\x12N\n" +
"\rcustom_domain\x18\x16 \x01(\v2\x15.CustomDomainSettingsR\fcustomDomain\x12@\n" +
"\x0fcategory_filter\x18\" \x01(\v2\x17.CategoryFilterSettingsR\x0ecategoryFilter\x12N\n" +
"\x17blocking_mode_custom_ip\x18\r \x01(\v2\x15.BlockingModeCustomIPH\x00R\x14blockingModeCustomIp\x12M\n" +
"\x16blocking_mode_nxdomain\x18\x0e \x01(\v2\x15.BlockingModeNXDOMAINH\x00R\x14blockingModeNxdomain\x12H\n" +
"\x15blocking_mode_null_ip\x18\x0f \x01(\v2\x13.BlockingModeNullIPH\x00R\x12blockingModeNullIp\x12J\n" +
@@ -3124,9 +3213,9 @@ const file_dns_proto_rawDesc = "" +
"\bDayRange\x12/\n" +
"\x05start\x18\x01 \x01(\v2\x19.google.protobuf.DurationR\x05start\x12+\n" +
"\x03end\x18\x02 \x01(\v2\x19.google.protobuf.DurationR\x03end\"?\n" +
"\x11RuleListsSettings\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\x12\x10\n" +
"\x03ids\x18\x02 \x03(\tR\x03ids\">\n" +
"\x11RuleListsSettings\x12\x10\n" +
"\x03ids\x18\x02 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\">\n" +
"\x14BlockingModeCustomIP\x12\x12\n" +
"\x04ipv4\x18\x01 \x01(\fR\x04ipv4\x12\x12\n" +
"\x04ipv6\x18\x02 \x01(\fR\x04ipv6\"\x16\n" +
@@ -3203,7 +3292,10 @@ const file_dns_proto_rawDesc = "" +
"\atickets\x18\x01 \x03(\v2\x0e.SessionTicketR\atickets\"7\n" +
"\rSessionTicket\x12\x12\n" +
"\x04name\x18\x01 \x01(\tR\x04name\x12\x12\n" +
"\x04data\x18\x02 \x01(\fR\x04data*\x87\x01\n" +
"\x04data\x18\x02 \x01(\fR\x04data\"D\n" +
"\x16CategoryFilterSettings\x12\x10\n" +
"\x03ids\x18\x02 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled*\x87\x01\n" +
"\n" +
"DeviceType\x12\v\n" +
"\aINVALID\x10\x00\x12\v\n" +
@@ -3246,7 +3338,7 @@ func file_dns_proto_rawDescGZIP() []byte {
}
var file_dns_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_dns_proto_msgTypes = make([]protoimpl.MessageInfo, 45)
var file_dns_proto_msgTypes = make([]protoimpl.MessageInfo, 46)
var file_dns_proto_goTypes = []any{
(DeviceType)(0), // 0: DeviceType
(*RateLimitSettingsRequest)(nil), // 1: RateLimitSettingsRequest
@@ -3290,93 +3382,95 @@ var file_dns_proto_goTypes = []any{
(*SessionTicketRequest)(nil), // 39: SessionTicketRequest
(*SessionTicketResponse)(nil), // 40: SessionTicketResponse
(*SessionTicket)(nil), // 41: SessionTicket
(*DeviceSettingsChange_Deleted)(nil), // 42: DeviceSettingsChange.Deleted
(*DeviceSettingsChange_Upserted)(nil), // 43: DeviceSettingsChange.Upserted
(*CustomDomain_Pending)(nil), // 44: CustomDomain.Pending
(*CustomDomain_Current)(nil), // 45: CustomDomain.Current
(*timestamppb.Timestamp)(nil), // 46: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 47: google.protobuf.Duration
(*emptypb.Empty)(nil), // 48: google.protobuf.Empty
(*CategoryFilterSettings)(nil), // 42: CategoryFilterSettings
(*DeviceSettingsChange_Deleted)(nil), // 43: DeviceSettingsChange.Deleted
(*DeviceSettingsChange_Upserted)(nil), // 44: DeviceSettingsChange.Upserted
(*CustomDomain_Pending)(nil), // 45: CustomDomain.Pending
(*CustomDomain_Current)(nil), // 46: CustomDomain.Current
(*timestamppb.Timestamp)(nil), // 47: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 48: google.protobuf.Duration
(*emptypb.Empty)(nil), // 49: google.protobuf.Empty
}
var file_dns_proto_depIdxs = []int32{
23, // 0: RateLimitSettingsResponse.allowed_subnets:type_name -> CidrRange
22, // 1: GlobalAccessSettingsResponse.standard:type_name -> AccessSettings
46, // 2: DNSProfilesRequest.sync_time:type_name -> google.protobuf.Timestamp
47, // 2: DNSProfilesRequest.sync_time:type_name -> google.protobuf.Timestamp
10, // 3: DNSProfile.safe_browsing:type_name -> SafeBrowsingSettings
12, // 4: DNSProfile.parental:type_name -> ParentalSettings
16, // 5: DNSProfile.rule_lists:type_name -> RuleListsSettings
47, // 6: DNSProfile.filtered_response_ttl:type_name -> google.protobuf.Duration
48, // 6: DNSProfile.filtered_response_ttl:type_name -> google.protobuf.Duration
22, // 7: DNSProfile.access:type_name -> AccessSettings
32, // 8: DNSProfile.rate_limit:type_name -> RateLimitSettings
8, // 9: DNSProfile.custom_domain:type_name -> CustomDomainSettings
17, // 10: DNSProfile.blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 11: DNSProfile.blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 12: DNSProfile.blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 13: DNSProfile.blocking_mode_refused:type_name -> BlockingModeREFUSED
17, // 14: DNSProfile.adult_blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 15: DNSProfile.adult_blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 16: DNSProfile.adult_blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 17: DNSProfile.adult_blocking_mode_refused:type_name -> BlockingModeREFUSED
17, // 18: DNSProfile.safe_browsing_blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 19: DNSProfile.safe_browsing_blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 20: DNSProfile.safe_browsing_blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 21: DNSProfile.safe_browsing_blocking_mode_refused:type_name -> BlockingModeREFUSED
11, // 22: DNSProfile.devices:type_name -> DeviceSettings
7, // 23: DNSProfile.device_changes:type_name -> DeviceSettingsChange
42, // 24: DeviceSettingsChange.deleted:type_name -> DeviceSettingsChange.Deleted
43, // 25: DeviceSettingsChange.upserted:type_name -> DeviceSettingsChange.Upserted
9, // 26: CustomDomainSettings.domains:type_name -> CustomDomain
44, // 27: CustomDomain.pending:type_name -> CustomDomain.Pending
45, // 28: CustomDomain.current:type_name -> CustomDomain.Current
24, // 29: DeviceSettings.authentication:type_name -> AuthenticationSettings
13, // 30: ParentalSettings.schedule:type_name -> ScheduleSettings
14, // 31: ScheduleSettings.weekly_range:type_name -> WeeklyRange
15, // 32: WeeklyRange.mon:type_name -> DayRange
15, // 33: WeeklyRange.tue:type_name -> DayRange
15, // 34: WeeklyRange.wed:type_name -> DayRange
15, // 35: WeeklyRange.thu:type_name -> DayRange
15, // 36: WeeklyRange.fri:type_name -> DayRange
15, // 37: WeeklyRange.sat:type_name -> DayRange
15, // 38: WeeklyRange.sun:type_name -> DayRange
47, // 39: DayRange.start:type_name -> google.protobuf.Duration
47, // 40: DayRange.end:type_name -> google.protobuf.Duration
46, // 41: DeviceBillingStat.last_activity_time:type_name -> google.protobuf.Timestamp
23, // 42: AccessSettings.allowlist_cidr:type_name -> CidrRange
23, // 43: AccessSettings.blocklist_cidr:type_name -> CidrRange
0, // 44: CreateDeviceRequest.device_type:type_name -> DeviceType
11, // 45: CreateDeviceResponse.device:type_name -> DeviceSettings
47, // 46: RateLimitedError.retry_delay:type_name -> google.protobuf.Duration
23, // 47: RateLimitSettings.client_cidr:type_name -> CidrRange
48, // 48: RemoteKVGetResponse.empty:type_name -> google.protobuf.Empty
47, // 49: RemoteKVSetRequest.ttl:type_name -> google.protobuf.Duration
41, // 50: SessionTicketResponse.tickets:type_name -> SessionTicket
11, // 51: DeviceSettingsChange.Upserted.device:type_name -> DeviceSettings
46, // 52: CustomDomain.Pending.expire:type_name -> google.protobuf.Timestamp
46, // 53: CustomDomain.Current.not_before:type_name -> google.protobuf.Timestamp
46, // 54: CustomDomain.Current.not_after:type_name -> google.protobuf.Timestamp
5, // 55: DNSService.getDNSProfiles:input_type -> DNSProfilesRequest
21, // 56: DNSService.saveDevicesBillingStat:input_type -> DeviceBillingStat
25, // 57: DNSService.createDeviceByHumanId:input_type -> CreateDeviceRequest
1, // 58: RateLimitService.getRateLimitSettings:input_type -> RateLimitSettingsRequest
3, // 59: RateLimitService.getGlobalAccessSettings:input_type -> GlobalAccessSettingsRequest
33, // 60: RemoteKVService.get:input_type -> RemoteKVGetRequest
35, // 61: RemoteKVService.set:input_type -> RemoteKVSetRequest
37, // 62: CustomDomainService.getCustomDomainCertificate:input_type -> CustomDomainCertificateRequest
39, // 63: SessionTicketService.getSessionTickets:input_type -> SessionTicketRequest
6, // 64: DNSService.getDNSProfiles:output_type -> DNSProfile
48, // 65: DNSService.saveDevicesBillingStat:output_type -> google.protobuf.Empty
26, // 66: DNSService.createDeviceByHumanId:output_type -> CreateDeviceResponse
2, // 67: RateLimitService.getRateLimitSettings:output_type -> RateLimitSettingsResponse
4, // 68: RateLimitService.getGlobalAccessSettings:output_type -> GlobalAccessSettingsResponse
34, // 69: RemoteKVService.get:output_type -> RemoteKVGetResponse
36, // 70: RemoteKVService.set:output_type -> RemoteKVSetResponse
38, // 71: CustomDomainService.getCustomDomainCertificate:output_type -> CustomDomainCertificateResponse
40, // 72: SessionTicketService.getSessionTickets:output_type -> SessionTicketResponse
64, // [64:73] is the sub-list for method output_type
55, // [55:64] is the sub-list for method input_type
55, // [55:55] is the sub-list for extension type_name
55, // [55:55] is the sub-list for extension extendee
0, // [0:55] is the sub-list for field type_name
42, // 10: DNSProfile.category_filter:type_name -> CategoryFilterSettings
17, // 11: DNSProfile.blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 12: DNSProfile.blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 13: DNSProfile.blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 14: DNSProfile.blocking_mode_refused:type_name -> BlockingModeREFUSED
17, // 15: DNSProfile.adult_blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 16: DNSProfile.adult_blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 17: DNSProfile.adult_blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 18: DNSProfile.adult_blocking_mode_refused:type_name -> BlockingModeREFUSED
17, // 19: DNSProfile.safe_browsing_blocking_mode_custom_ip:type_name -> BlockingModeCustomIP
18, // 20: DNSProfile.safe_browsing_blocking_mode_nxdomain:type_name -> BlockingModeNXDOMAIN
19, // 21: DNSProfile.safe_browsing_blocking_mode_null_ip:type_name -> BlockingModeNullIP
20, // 22: DNSProfile.safe_browsing_blocking_mode_refused:type_name -> BlockingModeREFUSED
11, // 23: DNSProfile.devices:type_name -> DeviceSettings
7, // 24: DNSProfile.device_changes:type_name -> DeviceSettingsChange
43, // 25: DeviceSettingsChange.deleted:type_name -> DeviceSettingsChange.Deleted
44, // 26: DeviceSettingsChange.upserted:type_name -> DeviceSettingsChange.Upserted
9, // 27: CustomDomainSettings.domains:type_name -> CustomDomain
45, // 28: CustomDomain.pending:type_name -> CustomDomain.Pending
46, // 29: CustomDomain.current:type_name -> CustomDomain.Current
24, // 30: DeviceSettings.authentication:type_name -> AuthenticationSettings
13, // 31: ParentalSettings.schedule:type_name -> ScheduleSettings
14, // 32: ScheduleSettings.weekly_range:type_name -> WeeklyRange
15, // 33: WeeklyRange.mon:type_name -> DayRange
15, // 34: WeeklyRange.tue:type_name -> DayRange
15, // 35: WeeklyRange.wed:type_name -> DayRange
15, // 36: WeeklyRange.thu:type_name -> DayRange
15, // 37: WeeklyRange.fri:type_name -> DayRange
15, // 38: WeeklyRange.sat:type_name -> DayRange
15, // 39: WeeklyRange.sun:type_name -> DayRange
48, // 40: DayRange.start:type_name -> google.protobuf.Duration
48, // 41: DayRange.end:type_name -> google.protobuf.Duration
47, // 42: DeviceBillingStat.last_activity_time:type_name -> google.protobuf.Timestamp
23, // 43: AccessSettings.allowlist_cidr:type_name -> CidrRange
23, // 44: AccessSettings.blocklist_cidr:type_name -> CidrRange
0, // 45: CreateDeviceRequest.device_type:type_name -> DeviceType
11, // 46: CreateDeviceResponse.device:type_name -> DeviceSettings
48, // 47: RateLimitedError.retry_delay:type_name -> google.protobuf.Duration
23, // 48: RateLimitSettings.client_cidr:type_name -> CidrRange
49, // 49: RemoteKVGetResponse.empty:type_name -> google.protobuf.Empty
48, // 50: RemoteKVSetRequest.ttl:type_name -> google.protobuf.Duration
41, // 51: SessionTicketResponse.tickets:type_name -> SessionTicket
11, // 52: DeviceSettingsChange.Upserted.device:type_name -> DeviceSettings
47, // 53: CustomDomain.Pending.expire:type_name -> google.protobuf.Timestamp
47, // 54: CustomDomain.Current.not_before:type_name -> google.protobuf.Timestamp
47, // 55: CustomDomain.Current.not_after:type_name -> google.protobuf.Timestamp
5, // 56: DNSService.getDNSProfiles:input_type -> DNSProfilesRequest
21, // 57: DNSService.saveDevicesBillingStat:input_type -> DeviceBillingStat
25, // 58: DNSService.createDeviceByHumanId:input_type -> CreateDeviceRequest
1, // 59: RateLimitService.getRateLimitSettings:input_type -> RateLimitSettingsRequest
3, // 60: RateLimitService.getGlobalAccessSettings:input_type -> GlobalAccessSettingsRequest
33, // 61: RemoteKVService.get:input_type -> RemoteKVGetRequest
35, // 62: RemoteKVService.set:input_type -> RemoteKVSetRequest
37, // 63: CustomDomainService.getCustomDomainCertificate:input_type -> CustomDomainCertificateRequest
39, // 64: SessionTicketService.getSessionTickets:input_type -> SessionTicketRequest
6, // 65: DNSService.getDNSProfiles:output_type -> DNSProfile
49, // 66: DNSService.saveDevicesBillingStat:output_type -> google.protobuf.Empty
26, // 67: DNSService.createDeviceByHumanId:output_type -> CreateDeviceResponse
2, // 68: RateLimitService.getRateLimitSettings:output_type -> RateLimitSettingsResponse
4, // 69: RateLimitService.getGlobalAccessSettings:output_type -> GlobalAccessSettingsResponse
34, // 70: RemoteKVService.get:output_type -> RemoteKVGetResponse
36, // 71: RemoteKVService.set:output_type -> RemoteKVSetResponse
38, // 72: CustomDomainService.getCustomDomainCertificate:output_type -> CustomDomainCertificateResponse
40, // 73: SessionTicketService.getSessionTickets:output_type -> SessionTicketResponse
65, // [65:74] is the sub-list for method output_type
56, // [56:65] is the sub-list for method input_type
56, // [56:56] is the sub-list for extension type_name
56, // [56:56] is the sub-list for extension extendee
0, // [0:56] is the sub-list for field type_name
}
func init() { file_dns_proto_init() }
@@ -3419,7 +3513,7 @@ func file_dns_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_dns_proto_rawDesc), len(file_dns_proto_rawDesc)),
NumEnums: 1,
NumMessages: 45,
NumMessages: 46,
NumExtensions: 0,
NumServices: 5,
},


@@ -125,12 +125,24 @@ message DNSProfile {
SafeBrowsingSettings safe_browsing = 5;
ParentalSettings parental = 6;
/**
* Field rule_lists contains settings for the rule-list filters. If absent,
* the rule-list filtering is disabled.
*/
RuleListsSettings rule_lists = 7;
google.protobuf.Duration filtered_response_ttl = 10;
AccessSettings access = 18;
RateLimitSettings rate_limit = 20;
CustomDomainSettings custom_domain = 22;
/**
* Field category_filter contains settings for the category filtering. If
* absent, the category filtering is disabled.
*/
CategoryFilterSettings category_filter = 34;
// One-of fields
/**
@@ -316,9 +328,24 @@ message DayRange {
google.protobuf.Duration end = 2;
}
/**
* Message RuleListsSettings contains settings for the rule-list filters.
*
* The fields are ordered in a way that optimizes the generated structures'
* layouts.
*/
message RuleListsSettings {
bool enabled = 1;
/**
* Field ids contains unique identifiers of the rule-list filters being used.
* IDs MUST be unique, between 1 and 128 bytes long, and only contain ASCII
* symbols with no slashes.
*/
repeated string ids = 2;
/**
* Field enabled, if true, enables the rule-list filter feature.
*/
bool enabled = 1;
}
/**
@@ -466,3 +493,23 @@ message SessionTicket {
string name = 1;
bytes data = 2;
}
/**
* Message CategoryFilterSettings contains settings for the category filtering.
*
* The fields are ordered in a way that optimizes the generated structures'
* layouts.
*/
message CategoryFilterSettings {
/**
* Field ids contains unique identifiers of the categories being blocked.
* IDs MUST be unique, between 1 and 128 bytes long, and only contain ASCII
* symbols with no slashes.
*/
repeated string ids = 2;
/**
* Field enabled, if true, enables the category filter feature.
*/
bool enabled = 1;
}
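The comments on `RuleListsSettings.ids` and `CategoryFilterSettings.ids` above impose the same constraints: IDs must be 1 to 128 bytes long, ASCII-only, and contain no slashes. A minimal sketch of that check, purely illustrative (the real validation lives in constructors such as `filter.NewCategoryID`, whose exact rules may differ):

```go
package main

import "fmt"

// validFilterID reports whether id satisfies the constraints documented for
// RuleListsSettings.ids and CategoryFilterSettings.ids: between 1 and 128
// bytes long, ASCII-only, and containing no slashes.
func validFilterID(id string) bool {
	if len(id) < 1 || len(id) > 128 {
		return false
	}

	for i := 0; i < len(id); i++ {
		c := id[i]
		if c > 0x7F || c == '/' {
			return false
		}
	}

	return true
}

func main() {
	fmt.Println(validFilterID("games"))  // true
	fmt.Println(validFilterID("bad/id")) // false
	fmt.Println(validFilterID(""))       // false
}
```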


@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.5.1
// - protoc v6.32.0
// - protoc v6.33.1
// source: dns.proto
package backendpb


@@ -25,8 +25,11 @@ func (x *ParentalSettings) toInternal(
ctx context.Context,
errColl errcoll.Interface,
logger *slog.Logger,
categoryFilter *CategoryFilterSettings,
) (c *filter.ConfigParental, err error) {
c = &filter.ConfigParental{}
c = &filter.ConfigParental{
Categories: categoryFilter.toInternal(ctx, errColl, logger),
}
if x == nil {
return c, nil
}
@@ -512,3 +515,34 @@ func (x *RuleListsSettings) toInternal(
return c
}
// toInternal is a helper that converts category filter settings from the
// backend response to the AdGuard DNS filter categories configuration. If x
// is nil, toInternal returns a disabled configuration.
func (x *CategoryFilterSettings) toInternal(
ctx context.Context,
errColl errcoll.Interface,
logger *slog.Logger,
) (c *filter.ConfigCategories) {
c = &filter.ConfigCategories{}
if x == nil {
return c
}
c.Enabled = x.Enabled
c.IDs = make([]filter.CategoryID, 0, len(x.Ids))
for i, idStr := range x.Ids {
id, err := filter.NewCategoryID(idStr)
if err != nil {
err = fmt.Errorf("at index %d: %w", i, err)
errcoll.Collect(ctx, errColl, logger, "converting category id", err)
continue
}
c.IDs = append(c.IDs, id)
}
return c
}
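The converter above relies on Go's nil-safe pointer-receiver idiom: calling `toInternal` on an absent (nil) settings message yields a disabled default rather than a panic. A self-contained sketch of the same pattern, with simplified stand-in types (`settings` and `config` here are hypothetical, not the actual protobuf or filter types):

```go
package main

import "fmt"

// config loosely mirrors the shape of filter.ConfigCategories.
type config struct {
	Enabled bool
	IDs     []string
}

// settings stands in for the protobuf CategoryFilterSettings message.
type settings struct {
	Enabled bool
	Ids     []string
}

// toInternal follows the nil-safe pattern used above: a nil receiver yields a
// disabled, empty configuration instead of panicking.
func (x *settings) toInternal() (c *config) {
	c = &config{}
	if x == nil {
		return c
	}

	c.Enabled = x.Enabled
	c.IDs = append(c.IDs, x.Ids...)

	return c
}

func main() {
	var missing *settings
	fmt.Println(missing.toInternal().Enabled) // false: absent means disabled
}
```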


@@ -328,7 +328,7 @@ func (s *ProfileStorage) newFilterConfig(
logger *slog.Logger,
errColl errcoll.Interface,
) (conf *filter.ConfigClient, err error) {
parental, err := p.Parental.toInternal(ctx, s.errColl, s.logger)
parental, err := p.Parental.toInternal(ctx, s.errColl, s.logger, p.CategoryFilter)
if err != nil {
return nil, fmt.Errorf("parental: %w", err)
}


@@ -631,7 +631,11 @@ func NewTestDNSProfile(tb testing.TB) (dp *DNSProfile) {
},
BlockChromePrefetch: true,
CustomDomain: customDomain,
AccountId: TestAccountIDStr,
CategoryFilter: &CategoryFilterSettings{
Ids: []string{"games"},
Enabled: true,
},
AccountId: TestAccountIDStr,
}
}
@@ -666,7 +670,13 @@ func newProfile(tb testing.TB) (p *agd.Profile) {
Enabled: true,
}
wantCategories := &filter.ConfigCategories{
Enabled: true,
IDs: []filter.CategoryID{"games"},
}
wantParental := &filter.ConfigParental{
Categories: wantCategories,
PauseSchedule: &filter.ConfigSchedule{
Week: &filter.WeeklySchedule{
nil,


@@ -763,6 +763,17 @@ func (b *builder) initFilterStorage(ctx context.Context) (err error) {
blockedSvcIdxURL = &b.env.BlockedServiceIndexURL.URL
}
var domainMtrc *metrics.DomainFilter
var categoryIdxURL *url.URL
if b.env.CategoryFilterEnabled {
categoryIdxURL = &b.env.CategoryFilterIndexURL.URL
domainMtrc, err = metrics.NewDomainFilter(b.mtrcNamespace, b.promRegisterer)
if err != nil {
return fmt.Errorf("registering domain filter metrics: %w", err)
}
}
b.filterStorage, err = filterstorage.New(&filterstorage.Config{
BaseLogger: b.baseLogger,
Logger: b.baseLogger.With(slogutil.KeyPrefix, filter.StoragePrefix),
@@ -780,6 +791,22 @@ func (b *builder) initFilterStorage(ctx context.Context) (err error) {
ResultCacheEnabled: c.RuleListCache.Enabled,
Enabled: bool(b.env.BlockedServiceEnabled),
},
CategoryDomainsIndex: &filterstorage.IndexConfig{
IndexURL: categoryIdxURL,
// TODO(a.garipov): Consider adding a separate parameter here.
IndexMaxSize: c.MaxSize,
MaxSize: c.MaxSize,
IndexRefreshTimeout: time.Duration(c.IndexRefreshTimeout),
// TODO(a.garipov): Consider adding a separate parameter here.
IndexStaleness: refrIvl,
// TODO(a.garipov): Consider adding a separate parameter here.
RefreshTimeout: refrTimeout,
// TODO(a.garipov): Consider adding a separate parameter here.
Staleness: refrIvl,
ResultCacheCount: c.RuleListCache.Size,
ResultCacheEnabled: c.RuleListCache.Enabled,
Enabled: bool(b.env.CategoryFilterEnabled),
},
Custom: &filterstorage.CustomConfig{
CacheCount: c.CustomFilterCacheSize,
},
@@ -788,7 +815,7 @@ func (b *builder) initFilterStorage(ctx context.Context) (err error) {
Dangerous: b.safeBrowsing,
NewlyRegistered: b.newRegDomains,
},
RuleLists: &filterstorage.RuleListsConfig{
RuleListsIndex: &filterstorage.IndexConfig{
IndexURL: &b.env.FilterIndexURL.URL,
// TODO(a.garipov): Consider adding a separate parameter here.
IndexMaxSize: c.MaxSize,
@@ -801,6 +828,8 @@ func (b *builder) initFilterStorage(ctx context.Context) (err error) {
Staleness: refrIvl,
ResultCacheCount: c.RuleListCache.Size,
ResultCacheEnabled: c.RuleListCache.Enabled,
// TODO(a.garipov): Consider making configurable.
Enabled: true,
},
SafeSearchGeneral: b.newSafeSearchConfig(
b.env.GeneralSafeSearchURL,
@@ -812,11 +841,13 @@ func (b *builder) initFilterStorage(ctx context.Context) (err error) {
filter.IDYoutubeSafeSearch,
bool(b.env.YoutubeSafeSearchEnabled),
),
CacheManager: b.cacheManager,
Clock: timeutil.SystemClock{},
ErrColl: b.errColl,
Metrics: b.filterMtrc,
CacheDir: b.env.FilterCachePath,
CacheManager: b.cacheManager,
Clock: timeutil.SystemClock{},
DomainMetrics: domainMtrc,
ErrColl: b.errColl,
Metrics: b.filterMtrc,
CacheDir: b.env.FilterCachePath,
DomainFilterSubDomainNum: defaultSubDomainNum,
})
if err != nil {
return fmt.Errorf("creating default filter storage: %w", err)
@@ -1513,6 +1544,7 @@ func (b *builder) initProfileDB(ctx context.Context) (err error) {
Metrics: profDBMtrc,
Storage: strg,
CacheFilePath: b.env.ProfilesCachePath,
CacheFileIvl: time.Duration(b.env.ProfilesCacheIvl),
FullSyncIvl: time.Duration(c.FullRefreshIvl),
FullSyncRetryIvl: time.Duration(c.FullRefreshRetryIvl),
ResponseSizeEstimate: respSzEst,


@@ -36,6 +36,7 @@ type environment struct {
BackendRateLimitURL *urlutil.URL `env:"BACKEND_RATELIMIT_URL"`
BillStatURL *urlutil.URL `env:"BILLSTAT_URL"`
BlockedServiceIndexURL *urlutil.URL `env:"BLOCKED_SERVICE_INDEX_URL"`
CategoryFilterIndexURL *urlutil.URL `env:"CATEGORY_FILTER_INDEX_URL"`
ConsulAllowlistURL *urlutil.URL `env:"CONSUL_ALLOWLIST_URL"`
ConsulDNSCheckKVURL *urlutil.URL `env:"CONSUL_DNSCHECK_KV_URL"`
ConsulDNSCheckSessionURL *urlutil.URL `env:"CONSUL_DNSCHECK_SESSION_URL"`
@@ -90,6 +91,7 @@ type environment struct {
CustomDomainsRefreshIvl timeutil.Duration `env:"CUSTOM_DOMAINS_REFRESH_INTERVAL"`
DNSCheckKVTTL timeutil.Duration `env:"DNSCHECK_KV_TTL"`
ProfilesCacheIvl timeutil.Duration `env:"PROFILES_CACHE_INTERVAL"`
SessionTicketRefreshIvl timeutil.Duration `env:"SESSION_TICKET_REFRESH_INTERVAL"`
StandardAccessRefreshIvl timeutil.Duration `env:"STANDARD_ACCESS_REFRESH_INTERVAL"`
StandardAccessTimeout timeutil.Duration `env:"STANDARD_ACCESS_TIMEOUT"`
@@ -105,6 +107,7 @@ type environment struct {
Verbosity uint8 `env:"VERBOSE" envDefault:"0"`
AdultBlockingEnabled strictBool `env:"ADULT_BLOCKING_ENABLED" envDefault:"1"`
CategoryFilterEnabled strictBool `env:"CATEGORY_FILTER_ENABLED" envDefault:"0"`
CrashOutputEnabled strictBool `env:"CRASH_OUTPUT_ENABLED" envDefault:"0"`
CustomDomainsEnabled strictBool `env:"CUSTOM_DOMAINS_ENABLED" envDefault:"1"`
LogTimestamp strictBool `env:"LOG_TIMESTAMP" envDefault:"1"`
@@ -147,6 +150,11 @@ func (envs *environment) Validate() (err error) {
))
}
err = envs.validateCategoryFilterIndex()
if err != nil {
errs = append(errs, fmt.Errorf("CATEGORY_FILTER_INDEX_URL: %w", err))
}
err = envs.validateWebStaticDir()
if err != nil {
errs = append(errs, fmt.Errorf("WEB_STATIC_DIR: %w", err))
@@ -248,6 +256,25 @@ func (envs *environment) validateHTTPURLs(errs []error) (res []error) {
return res
}
// validateCategoryFilterIndex returns an error if the CATEGORY_FILTER_INDEX_URL
// environment variable contains an invalid value.
func (envs *environment) validateCategoryFilterIndex() (err error) {
if !envs.CategoryFilterEnabled {
return nil
}
if envs.CategoryFilterIndexURL == nil {
return errors.ErrNoValue
}
s := envs.CategoryFilterIndexURL.Scheme
if !strings.EqualFold(s, urlutil.SchemeFile) && !urlutil.IsValidHTTPURLScheme(s) {
return errors.Error("not a valid http(s) url or file uri")
}
return nil
}
// validateWebStaticDir returns an error if the WEB_STATIC_DIR environment
// variable contains an invalid value.
func (envs *environment) validateWebStaticDir() (err error) {
@@ -442,21 +469,19 @@ func (envs *environment) validateStandardAccess(errs []error) (res []error) {
// validateProfilesConf returns an error if environment variables for profiles
// database configuration contain errors.
func (envs *environment) validateProfilesConf(profilesEnabled bool) (err error) {
var errs []error
if profilesEnabled {
errs = envs.validateProfilesURLs(errs)
err = validate.NoGreaterThan(
"PROFILES_MAX_RESP_SIZE",
envs.ProfilesMaxRespSize,
math.MaxInt,
)
if err != nil {
errs = append(errs, err)
}
if !profilesEnabled {
return nil
}
var errs []error
errs = envs.validateProfilesURLs(errs)
errs = append(
errs,
validate.NoGreaterThan("PROFILES_MAX_RESP_SIZE", envs.ProfilesMaxRespSize, math.MaxInt),
validate.Positive("PROFILES_CACHE_INTERVAL", envs.ProfilesCacheIvl),
)
return errors.Join(errs...)
}
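The refactored `validateProfilesConf` above swaps a nested `if` for an early return, then collects the remaining checks with `errors.Join`, which silently drops nil entries. A small sketch of that pattern; `validatePositive` is a stand-in for the golibs `validate` helpers, whose real signatures differ:

```go
package main

import (
	"errors"
	"fmt"
)

// validatePositive is an illustrative stand-in for validate.Positive.
func validatePositive(name string, v int) error {
	if v <= 0 {
		return fmt.Errorf("%s: must be positive, got %d", name, v)
	}

	return nil
}

// validateConf bails out early when the feature is off, then joins all
// remaining checks into a single error; nil results are discarded by
// errors.Join.
func validateConf(enabled bool, maxRespSize, cacheIvl int) error {
	if !enabled {
		return nil
	}

	return errors.Join(
		validatePositive("PROFILES_MAX_RESP_SIZE", maxRespSize),
		validatePositive("PROFILES_CACHE_INTERVAL", cacheIvl),
	)
}

func main() {
	fmt.Println(validateConf(true, 1024, 0))
	// PROFILES_CACHE_INTERVAL: must be positive, got 0
}
```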


@@ -114,6 +114,9 @@ type fltGrpParental struct {
// group. c must be valid.
func (c *fltGrpParental) toInternal() (fltConf *filter.ConfigParental) {
return &filter.ConfigParental{
Categories: &filter.ConfigCategories{
Enabled: false,
},
PauseSchedule: nil,
BlockedServices: nil,
Enabled: c.Enabled,

View File

@@ -2,71 +2,56 @@ package dnsservertest
import (
"context"
"slices"
"sync"
"github.com/quic-go/quic-go/logging"
"github.com/quic-go/quic-go"
"github.com/quic-go/quic-go/qlog"
"github.com/quic-go/quic-go/qlogwriter"
)
// QUICTracer is a helper structure for tracing QUIC connections.
type QUICTracer struct {
// mu protects fields of *QUICTracer and also protects fields of every
// nested *quicConnTracer.
mu *sync.Mutex
connTracers []*quicConnTracer
// Tracer collects QUIC connection traces for testing.
//
// TODO(f.setrakov): Consider moving to golibs.
type Tracer struct {
tracers []*quicTracer
}
// NewQUICTracer returns a new QUIC tracer helper.
func NewQUICTracer() (t *QUICTracer) {
return &QUICTracer{
mu: &sync.Mutex{},
}
}
// TracerForConnection implements the logging.Tracer interface for *quicTracer.
func (t *QUICTracer) TracerForConnection(
// TraceForConnection creates a tracer for a QUIC connection.
func (t *Tracer) TraceForConnection(
_ context.Context,
_ logging.Perspective,
_ logging.ConnectionID,
) (connTracer *logging.ConnectionTracer) {
t.mu.Lock()
defer t.mu.Unlock()
_ bool,
_ quic.ConnectionID,
) (tracer qlogwriter.Trace) {
newTracer := &quicTracer{recorder: &headerRecorder{}}
t.tracers = append(t.tracers, newTracer)
ct := &quicConnTracer{
parentMu: t.mu,
}
t.connTracers = append(t.connTracers, ct)
return &logging.ConnectionTracer{
SentLongHeaderPacket: ct.SentLongHeaderPacket,
}
return newTracer
}
// ConnectionsInfo returns the traced connections' information.
func (t *QUICTracer) ConnectionsInfo() (conns []*QUICConnInfo) {
t.mu.Lock()
defer t.mu.Unlock()
// ConnectionsInfo returns info for all traced connections.
func (t *Tracer) ConnectionsInfo() (res []*connInfo) {
res = make([]*connInfo, 0, len(t.tracers))
for _, tracer := range t.tracers {
hdrs := tracer.recorder.headersWithLock()
for _, tracer := range t.connTracers {
conns = append(conns, &QUICConnInfo{
headers: tracer.headers,
res = append(res, &connInfo{
headers: hdrs,
})
}
return conns
return res
}
// QUICConnInfo contains information about packets that were recorded by a
// [QUICTracer].
type QUICConnInfo struct {
headers []*logging.Header
// connInfo contains all trace event headers recorded for a single connection.
type connInfo struct {
headers []qlog.PacketHeader
}
// Is0RTT returns true if this connection's packets contain 0-RTT packets.
func (c *QUICConnInfo) Is0RTT() (ok bool) {
// Is0RTT returns true if the connection used 0-RTT packets.
func (c *connInfo) Is0RTT() (ok bool) {
for _, hdr := range c.headers {
if t := logging.PacketTypeFromHeader(hdr); t == logging.PacketType0RTT {
if hdr.PacketType == qlog.PacketType0RTT {
return true
}
}
@@ -74,23 +59,60 @@ func (c *QUICConnInfo) Is0RTT() (ok bool) {
return false
}
// quicConnTracer is a helper structure for tracing QUIC connections.
type quicConnTracer struct {
parentMu *sync.Mutex
headers []*logging.Header
// quicTracer is an implementation of [qlogwriter.Trace] for testing.
type quicTracer struct {
// recorder is used for recording trace events. It must not be nil.
recorder *headerRecorder
}
// SentLongHeaderPacket is a method for the [logging.ConnectionTracer] method.
func (q *quicConnTracer) SentLongHeaderPacket(
extHdr *logging.ExtendedHeader,
_ logging.ByteCount,
_ logging.ECN,
_ *logging.AckFrame,
_ []logging.Frame,
) {
q.parentMu.Lock()
defer q.parentMu.Unlock()
// type check
var _ qlogwriter.Trace = (*quicTracer)(nil)
hdr := extHdr.Header
q.headers = append(q.headers, &hdr)
// AddProducer implements the [qlogwriter.Trace] interface for *quicTracer.
func (q *quicTracer) AddProducer() (recorder qlogwriter.Recorder) {
return q.recorder
}
// SupportsSchemas implements the [qlogwriter.Trace] interface for *quicTracer.
func (q *quicTracer) SupportsSchemas(_ string) (ok bool) {
return false
}
// headerRecorder is an implementation of [qlogwriter.Recorder] that records
// [qlog.PacketSent] event headers.
type headerRecorder struct {
headers []qlog.PacketHeader
mx sync.Mutex
}
// type check
var _ qlogwriter.Recorder = (*headerRecorder)(nil)
// RecordEvent implements the [qlogwriter.Recorder] interface for
// *headerRecorder.
func (r *headerRecorder) RecordEvent(ev qlogwriter.Event) {
event, ok := ev.(qlog.PacketSent)
if !ok {
return
}
r.mx.Lock()
defer r.mx.Unlock()
r.headers = append(r.headers, event.Header)
}
// headersWithLock returns a copy of the recorded headers.  It is safe for
// concurrent use.
func (r *headerRecorder) headersWithLock() (res []qlog.PacketHeader) {
r.mx.Lock()
defer r.mx.Unlock()
return slices.Clone(r.headers)
}
// Close implements the [qlogwriter.Recorder] interface for
// *headerRecorder.
func (*headerRecorder) Close() (err error) {
return nil
}

View File

@@ -1,9 +1,9 @@
module github.com/AdguardTeam/AdGuardDNS/internal/dnsserver
go 1.25.3
go 1.25.5
require (
github.com/AdguardTeam/golibs v0.35.2
github.com/AdguardTeam/golibs v0.35.3
github.com/ameshkov/dnscrypt/v2 v2.4.0
github.com/ameshkov/dnsstamps v1.0.3
github.com/bluele/gcache v0.0.2
@@ -12,10 +12,10 @@ require (
github.com/panjf2000/ants/v2 v2.11.3
github.com/patrickmn/go-cache v2.1.1-0.20191004192108-46f407853014+incompatible
github.com/prometheus/client_golang v1.23.2
github.com/quic-go/quic-go v0.55.0
github.com/quic-go/quic-go v0.56.0
github.com/stretchr/testify v1.11.1
golang.org/x/net v0.46.0
golang.org/x/sys v0.37.0
golang.org/x/net v0.47.0
golang.org/x/sys v0.38.0
)
require (
@@ -26,18 +26,19 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.1 // indirect
github.com/prometheus/procfs v0.17.0 // indirect
github.com/prometheus/common v0.67.2 // indirect
github.com/prometheus/procfs v0.19.2 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
go.uber.org/mock v0.6.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b // indirect
golang.org/x/mod v0.29.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/tools v0.38.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/mod v0.30.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.39.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

View File

@@ -1,7 +1,4 @@
github.com/AdguardTeam/golibs v0.35.0 h1:O990+tbZ5W5yB0ybtaUJy4FUb0bXxyzeUC7t8cr1pCg=
github.com/AdguardTeam/golibs v0.35.0/go.mod h1:y552twxCtvOD8KKQ7ESjo10KZBAE+HSj24yAuAvz9IA=
github.com/AdguardTeam/golibs v0.35.2 h1:GVlx/CiCz5ZXQmyvFrE3JyeGsgubE8f4rJvRshYJVVs=
github.com/AdguardTeam/golibs v0.35.2/go.mod h1:p/l6tG7QCv+Hi5yVpv1oZInoatRGOWoyD1m+Ume+ZNY=
github.com/AdguardTeam/golibs v0.35.3 h1:DI0ffHyL3tFZ2UBEji3Aah7IvFwM5nY5yZoGvs1bnPY=
github.com/ameshkov/dnscrypt/v2 v2.4.0 h1:if6ZG2cuQmcP2TwSY+D0+8+xbPfoatufGlOQTMNkI9o=
github.com/ameshkov/dnscrypt/v2 v2.4.0/go.mod h1:WpEFV2uhebXb8Jhes/5/fSdpmhGV8TL22RDaeWwV6hI=
github.com/ameshkov/dnsstamps v1.0.3 h1:Srzik+J9mivH1alRACTbys2xOxs0lRH9qnTA7Y1OYVo=
@@ -37,14 +34,11 @@ github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.67.1 h1:OTSON1P4DNxzTg4hmKCc37o4ZAZDv0cfXLkOt0oEowI=
github.com/prometheus/common v0.67.1/go.mod h1:RpmT9v35q2Y+lsieQsdOh5sXZ6ajUGC8NjZAmr8vb0Q=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/prometheus/common v0.67.2 h1:PcBAckGFTIHt2+L3I33uNRTlKTplNzFctXcWhPyAEN8=
github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
github.com/quic-go/qpack v0.5.1/go.mod h1:+PC4XFrEskIVkcLzpEkbLqq1uCoxPhQuvK5rH1ZgaEg=
github.com/quic-go/quic-go v0.55.0 h1:zccPQIqYCXDt5NmcEabyYvOnomjs8Tlwl7tISjJh9Mk=
github.com/quic-go/quic-go v0.55.0/go.mod h1:DR51ilwU1uE164KuWXhinFcKWGlEjzys2l8zUl5Ss1U=
github.com/quic-go/quic-go v0.56.0 h1:q/TW+OLismmXAehgFLczhCDTYB3bFmua4D9lsNBWxvY=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -57,29 +51,15 @@ go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9 h1:TQwNpfvNkxAVlItJf6Cr5JTsVZoC/Sj7K3OZv2Pc14A=
golang.org/x/exp v0.0.0-20251002181428-27f1f14c8bb9/go.mod h1:TwQYMMnGpvZyc+JpB/UAuTNIsVJifOlSkrZkhcvpVUk=
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.28.0 h1:gQBtGhjxykdjY9YhZpSlZIsbnaE2+PgjfLWUQTnoZ1U=
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.45.0 h1:RLBg5JKixCy82FtLJpeNlVM0nrSqpCRYzVU1n8kj0tM=
golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/tools v0.37.0 h1:DVSRzp7FwePZW356yEAChSdNcQo6Nsp+fex1SUW09lE=
golang.org/x/tools v0.37.0/go.mod h1:MBN5QPQtLMHVdvsbtarmTNukZDdgwdwlO5qGacAzF0w=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@@ -278,7 +278,7 @@ func (s *ServerDNS) unblockTCPConns(ctx context.Context) {
s.tcpConns.Range(func(conn net.Conn) (cont bool) {
err := conn.SetReadDeadline(time.Unix(1, 0))
if err != nil {
if err != nil && !errors.Is(err, net.ErrClosed) {
s.baseLogger.WarnContext(ctx, "failed to unblock conn", slogutil.KeyError, err)
}

View File

@@ -362,12 +362,12 @@ func TestServerHTTPS_0RTT(t *testing.T) {
return srv.Shutdown(context.Background())
})
quicTracer := dnsservertest.NewQUICTracer()
quicTracer := dnsservertest.Tracer{}
// quicConfig with TokenStore set so that 0-RTT was enabled.
quicConfig := &quic.Config{
TokenStore: quic.NewLRUTokenStore(1, 10),
Tracer: quicTracer.TracerForConnection,
Tracer: quicTracer.TraceForConnection,
}
// ClientSessionCache in the tls.Config must also be set for 0-RTT to work.

View File

@@ -117,12 +117,12 @@ func TestServerQUIC_integration_0RTT(t *testing.T) {
return srv.Shutdown(testutil.ContextWithTimeout(t, testTimeout))
})
quicTracer := dnsservertest.NewQUICTracer()
quicTracer := dnsservertest.Tracer{}
// quicConfig with TokenStore set so that 0-RTT was enabled.
quicConfig := &quic.Config{
TokenStore: quic.NewLRUTokenStore(1, 10),
Tracer: quicTracer.TracerForConnection,
Tracer: quicTracer.TraceForConnection,
}
// ClientSessionCache in the tls.Config must also be set for 0-RTT to work.
@@ -143,6 +143,33 @@ func TestServerQUIC_integration_0RTT(t *testing.T) {
require.True(t, conns[1].Is0RTT())
}
// testQUICExchange initializes a new QUIC connection and sends one test DNS
// query through it.
func testQUICExchange(
t *testing.T,
addr *net.UDPAddr,
tlsConfig *tls.Config,
quicConfig *quic.Config,
) {
conn, err := quic.DialAddrEarly(context.Background(), addr.String(), tlsConfig, quicConfig)
require.NoError(t, err)
defer testutil.CleanupAndRequireSuccess(t, func() (err error) {
return conn.CloseWithError(0, "")
})
defer func(conn *quic.Conn, code quic.ApplicationErrorCode, s string) {
_ = conn.CloseWithError(code, s)
}(conn, 0, "")
// Create a test message.
req := dnsservertest.NewReq("example.org.", dns.TypeA, dns.ClassINET)
req.RecursionDesired = true
resp := requireSendQUICMessage(t, conn, req)
require.NotNil(t, resp)
}
func TestServerQUIC_integration_largeQuery(t *testing.T) {
tlsConfig := dnsservertest.CreateServerTLSConfig("example.org")
srv, addr, err := dnsservertest.RunLocalQUICServer(
@@ -181,33 +208,6 @@ func TestServerQUIC_integration_largeQuery(t *testing.T) {
require.True(t, resp.Response)
}
// testQUICExchange initializes a new QUIC connection and sends one test DNS
// query through it.
func testQUICExchange(
t *testing.T,
addr *net.UDPAddr,
tlsConfig *tls.Config,
quicConfig *quic.Config,
) {
conn, err := quic.DialAddrEarly(context.Background(), addr.String(), tlsConfig, quicConfig)
require.NoError(t, err)
defer testutil.CleanupAndRequireSuccess(t, func() (err error) {
return conn.CloseWithError(0, "")
})
defer func(conn *quic.Conn, code quic.ApplicationErrorCode, s string) {
_ = conn.CloseWithError(code, s)
}(conn, 0, "")
// Create a test message.
req := dnsservertest.NewReq("example.org.", dns.TypeA, dns.ClassINET)
req.RecursionDesired = true
resp := requireSendQUICMessage(t, conn, req)
require.NotNil(t, resp)
}
// sendQUICMessage is a test helper that sends a test QUIC message.
func sendQUICMessage(
conn *quic.Conn,

View File

@@ -63,7 +63,9 @@ func TestNewHandlers(t *testing.T) {
fltGrp := &agd.FilteringGroup{
FilterConfig: &filter.ConfigGroup{
Parental: &filter.ConfigParental{},
Parental: &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
},
RuleList: &filter.ConfigRuleList{
IDs: []filter.ID{dnssvctest.FilterListID1},
Enabled: true,

View File

@@ -75,8 +75,10 @@ func newTestService(
prof := &agd.Profile{
FilterConfig: &filter.ConfigClient{
Custom: &filter.ConfigCustom{},
Parental: &filter.ConfigParental{},
Custom: &filter.ConfigCustom{},
Parental: &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
},
RuleList: &filter.ConfigRuleList{
IDs: []filter.ID{dnssvctest.FilterListID1},
Enabled: true,
@@ -206,7 +208,9 @@ func newTestService(
fltGrp := &agd.FilteringGroup{
FilterConfig: &filter.ConfigGroup{
Parental: &filter.ConfigParental{},
Parental: &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
},
RuleList: &filter.ConfigRuleList{
IDs: []filter.ID{dnssvctest.FilterListID1},
Enabled: true,

View File

@@ -342,7 +342,9 @@ func newContext(
})
fltConf := &filter.ConfigGroup{
Parental: &filter.ConfigParental{},
Parental: &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
},
RuleList: &filter.ConfigRuleList{
IDs: []filter.ID{
dnssvctest.FilterListID1,

View File

@@ -0,0 +1,41 @@
package filter
import (
"fmt"
"github.com/AdguardTeam/AdGuardDNS/internal/agdvalidate"
"github.com/AdguardTeam/golibs/errors"
)
// CategoryID is the ID of a category filter. It is an opaque string. It must
// be a valid FilterID.
type CategoryID string
// The maximum and minimum lengths of a CategoryID.
const (
MaxCategoryIDLen = 128
MinCategoryIDLen = 1
)
// CategoryIDNone is the zero value for a category filter ID.
const CategoryIDNone CategoryID = ""
// NewCategoryID converts a simple string into a CategoryID and makes sure that
// it's valid. This should be preferred to a simple type conversion.
func NewCategoryID(s string) (id CategoryID, err error) {
defer func() { err = errors.Annotate(err, "bad category id %q: %w", s) }()
err = agdvalidate.Inclusion(len(s), MinCategoryIDLen, MaxCategoryIDLen, agdvalidate.UnitByte)
if err != nil {
return CategoryIDNone, err
}
// Allow only the printable, non-whitespace ASCII characters. Technically
// we only need to exclude carriage return, line feed, and slash characters,
// but let's be more strict just in case.
if i, r := agdvalidate.FirstNonIDRune(s, true); i != -1 {
return CategoryIDNone, fmt.Errorf("bad rune %q at index %d", r, i)
}
return CategoryID(s), nil
}

View File

@@ -70,6 +70,10 @@ type Custom interface {
// ConfigParental is the configuration for parental-control filtering.
type ConfigParental struct {
// Categories is the configuration for category filtering. It must not be
// nil. It is ignored if [ConfigParental.Enabled] is false.
Categories *ConfigCategories
// PauseSchedule is the schedule for the pausing of the parental-control
// filtering. If it is nil, the parental-control filtering is never paused.
// It is ignored if [ConfigParental.Enabled] is false.
@@ -96,6 +100,16 @@ type ConfigParental struct {
SafeSearchYouTubeEnabled bool
}
// ConfigCategories is the configuration for category filtering.
type ConfigCategories struct {
// IDs are the IDs of the filtering categories used for this filtering
// configuration. They are ignored if [ConfigCategories.Enabled] is false.
IDs []CategoryID
// Enabled shows whether the category filtering is enabled.
Enabled bool
}
// ConfigRuleList is the configuration for rule-list based filtering.
type ConfigRuleList struct {
// IDs are the IDs of the filtering rule lists used for this filtering

View File

@@ -0,0 +1,107 @@
package filterstorage
import (
"context"
"fmt"
"log/slog"
"net/url"
"github.com/AdguardTeam/AdGuardDNS/internal/agdhttp"
"github.com/AdguardTeam/AdGuardDNS/internal/errcoll"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
)
// categoryResp is the struct for the JSON response from a category filter index
// API.
//
// TODO(a.garipov): Consider exporting for tests?
type categoryResp struct {
Filters map[string]*categoryRespFilter `json:"filters"`
}
// categoryRespFilter is the struct for a filter from the JSON response from a
// category filter index API.
//
// NOTE: Keep these strings instead of unmarshalers to make sure that objects
// with invalid data do not prevent valid objects from being used.
type categoryRespFilter struct {
// DownloadURL contains the URL to use for downloading this filter.
DownloadURL string `json:"downloadUrl"`
}
// validate returns an error if f is invalid.
func (f *categoryRespFilter) validate(categoryName string) (err error) {
if f == nil {
return errors.ErrNoValue
}
var errs []error
// TODO(a.garipov): Use urlutil.URL or add IsValidURLString to golibs.
if f.DownloadURL == "" {
errs = append(errs, fmt.Errorf("downloadUrl: %w", errors.ErrEmptyValue))
}
if _, err = filter.NewCategoryID(categoryName); err != nil {
errs = append(errs, err)
}
return errors.Join(errs...)
}
// categoryData is the data of a single item in the category filtering-rule
// index response.
type categoryData struct {
url *url.URL
id filter.CategoryID
}
// toInternal converts the filters from the index to []*categoryData. All
// errors are logged and collected. logger and errColl must not be nil.
func (r *categoryResp) toInternal(
ctx context.Context,
logger *slog.Logger,
errColl errcoll.Interface,
) (fls []*categoryData) {
ids := container.NewMapSet[filter.CategoryID]()
fls = make([]*categoryData, 0, len(r.Filters))
for cat, fl := range r.Filters {
err := fl.validate(cat)
if err != nil {
err = fmt.Errorf("validating category filter %q: %w", cat, err)
errcoll.Collect(ctx, errColl, logger, "category index response", err)
continue
}
u, err := agdhttp.ParseHTTPURL(fl.DownloadURL)
if err != nil {
err = fmt.Errorf("validating url: %w", err)
errcoll.Collect(ctx, errColl, logger, "category index response", err)
continue
}
// Use a simple conversion, since [*categoryRespFilter.validate] has
// already made sure that the ID is valid.
id := filter.CategoryID(cat)
if ids.Has(id) {
err = fmt.Errorf("category id: %w: %q", errors.ErrDuplicated, cat)
errcoll.Collect(ctx, errColl, logger, "category index response", err)
continue
}
ids.Add(id)
fls = append(fls, &categoryData{
url: u,
id: id,
})
}
return fls
}
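The skip-on-error style of `toInternal`, where one malformed index object is logged and dropped instead of failing the whole refresh, can be sketched in a self-contained form; the type and function names and the JSON payload below are illustrative, not from this repository:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// indexFilter mirrors the one-string-field shape: keeping the download URL
// as a plain string ensures one malformed object cannot fail unmarshaling
// of the whole index.
type indexFilter struct {
	DownloadURL string `json:"downloadUrl"`
}

// parseIndex decodes a category index response and returns the download
// URLs of the valid entries, skipping nil objects and empty or unparsable
// URLs instead of failing outright.
func parseIndex(data []byte) (urls map[string]*url.URL, err error) {
	var resp struct {
		Filters map[string]*indexFilter `json:"filters"`
	}
	err = json.Unmarshal(data, &resp)
	if err != nil {
		return nil, err
	}

	urls = make(map[string]*url.URL, len(resp.Filters))
	for name, f := range resp.Filters {
		if f == nil || f.DownloadURL == "" {
			continue
		}

		u, err := url.Parse(f.DownloadURL)
		if err != nil {
			continue
		}

		urls[name] = u
	}

	return urls, nil
}

func main() {
	urls, err := parseIndex([]byte(`{"filters": {
		"adult_content": {"downloadUrl": "https://filters.example.test/adult.txt"},
		"gambling":      {"downloadUrl": ""}
	}}`))
	if err != nil {
		panic(err)
	}

	fmt.Println(len(urls))
	// Output: 1
}
```

JSON object keys are unique, so the sketch does not need the explicit duplicate set; the original keeps one because IDs could still collide after validation and conversion.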

View File

@@ -9,6 +9,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/errcoll"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/hashprefix"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/domain"
"github.com/AdguardTeam/golibs/timeutil"
"github.com/c2h5oh/datasize"
)
@@ -27,6 +28,10 @@ type Config struct {
// default filter storage. It must not be nil.
BlockedServices *BlockedServicesConfig
// CategoryDomainsIndex is the domain-list configuration of a category
// filter storage. It must not be nil.
CategoryDomainsIndex *IndexConfig
// Custom is the configuration of a custom filters storage for a default
// filter storage. It must not be nil.
Custom *CustomConfig
@@ -35,9 +40,9 @@ type Config struct {
// storage. It must not be nil.
HashPrefix *HashPrefixConfig
// RuleLists is the rule-list configuration for a default filter storage.
// It must not be nil.
RuleLists *RuleListsConfig
// RuleListsIndex is the rule-list index configuration for a default filter
// storage. It must not be nil.
RuleListsIndex *IndexConfig
// SafeSearchGeneral is the general safe-search configuration for a default
// filter storage. It must not be nil.
@@ -58,16 +63,26 @@ type Config struct {
// It must not be nil.
Clock timeutil.Clock
// DomainMetrics is the metrics for the domain filters in the storage. It
// must not be nil if [Config.CategoryDomainsIndex.Enabled] is true.
DomainMetrics domain.Metrics
// ErrColl is used to collect non-critical and rare errors as well as
// refresh errors. It must not be nil.
ErrColl errcoll.Interface
// Metrics are the metrics for the filters in the storage.
// Metrics are the metrics for the filters in the storage. It must not be
// nil.
Metrics filter.Metrics
// CacheDir is the path to the directory where the cached filter files are
// put. It must not be empty and the directory must exist.
CacheDir string
// DomainFilterSubDomainNum defines how many labels should be hashed to
// match against a hash prefix filter. It must be positive and fit into
// int.
DomainFilterSubDomainNum uint
}
// BlockedServicesConfig is the blocked-service filter configuration for a
@@ -130,30 +145,30 @@ type HashPrefixConfig struct {
NewlyRegistered *hashprefix.Filter
}
// RuleListsConfig is the rule-list configuration for a default filter storage.
type RuleListsConfig struct {
// IndexURL is the URL of the rule-list filter index. It must not be
// modified after calling [New]. It must not be nil.
// IndexConfig is the filters index configuration for a default filter storage.
type IndexConfig struct {
// IndexURL is the URL of the filter index. It must not be modified after
// calling [New]. It must not be nil.
IndexURL *url.URL
// IndexMaxSize is the maximum size of the downloadable filter-index
// IndexMaxSize is the maximum size of the downloadable filter index
// content. It must be positive.
IndexMaxSize datasize.ByteSize
// MaxSize is the maximum size of the content of a single rule-list filter.
// It must be positive.
// MaxSize is the maximum size of the content of a single filter. It must
// be positive.
MaxSize datasize.ByteSize
// IndexRefreshTimeout is the timeout for the update of the rule-list filter
// index. It must be positive.
// IndexRefreshTimeout is the timeout for the update of the filter index.
// It must be positive.
IndexRefreshTimeout time.Duration
// IndexStaleness is the time after which the cached index file is
// considered stale. It must be positive.
IndexStaleness time.Duration
// RefreshTimeout is the timeout for the update of a single rule-list
// filter. It must be positive.
// RefreshTimeout is the timeout for the update of a single filter. It must
// be positive.
RefreshTimeout time.Duration
// Staleness is the time after which the cached filter files are considered
@@ -161,11 +176,14 @@ type RuleListsConfig struct {
Staleness time.Duration
// ResultCacheCount is the count of items to keep in the LRU result cache of
// a single rule-list filter. It must be greater than zero.
// a single filter. It must be greater than zero.
ResultCacheCount int
// ResultCacheEnabled enables caching of results of the rule-list filters.
// ResultCacheEnabled enables caching of results of the filters.
ResultCacheEnabled bool
// Enabled shows whether the filters index is enabled.
Enabled bool
}
// SafeSearchConfig is the single safe-search configuration for a default filter

View File

@@ -14,6 +14,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/hashprefix"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/composite"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/domain"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/refreshable"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/rulelist"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/safesearch"
@@ -41,26 +42,39 @@ type Default struct {
safeSearchGeneral *safesearch.Filter
safeSearchYouTube *safesearch.Filter
// ruleListsMu protects [Default.ruleLists].
// domainFiltersMu protects domainFilters.
domainFiltersMu *sync.RWMutex
domainFilters domainFilters
// ruleListsMu protects ruleLists.
ruleListsMu *sync.RWMutex
ruleLists ruleLists
ruleListIdxRefr *refreshable.Refreshable
categoryDomainsIdxRefr *refreshable.Refreshable
ruleListIdxRefr *refreshable.Refreshable
cacheManager agdcache.Manager
clock timeutil.Clock
errColl errcoll.Interface
metrics filter.Metrics
cacheManager agdcache.Manager
clock timeutil.Clock
domainMetrics domain.Metrics
errColl errcoll.Interface
metrics filter.Metrics
cacheDir string
categoryDomainsStaleness time.Duration
categoryDomainsRefreshTimeout time.Duration
ruleListStaleness time.Duration
ruleListRefreshTimeout time.Duration
ruleListMaxSize datasize.ByteSize
categoryDomainsMaxSize datasize.ByteSize
ruleListMaxSize datasize.ByteSize
ruleListResCacheCount int
serviceResCacheCount int
categoryDomainsResCacheCount int
ruleListResCacheCount int
serviceResCacheCount int
domainFilterSubDomainNum uint
ruleListCacheEnabled bool
serviceResCacheEnabled bool
@@ -69,6 +83,9 @@ type Default struct {
// ruleLists is a convenient alias for an ID to filter mapping.
type ruleLists = map[filter.ID]*rulelist.Refreshable
// domainFilters is a convenient alias for a category ID to filter mapping.
type domainFilters = map[filter.CategoryID]*domain.Filter
// New returns a new default filter storage ready for initial refresh with
// [Default.RefreshInitial]. c must not be nil.
func New(c *Config) (s *Default, err error) {
@@ -89,30 +106,46 @@ func New(c *Config) (s *Default, err error) {
safeSearchGeneral: nil,
safeSearchYouTube: nil,
domainFiltersMu: &sync.RWMutex{},
// Initialized in [Default.RefreshInitial].
domainFilters: nil,
ruleListsMu: &sync.RWMutex{},
// Initialized in [Default.RefreshInitial].
ruleLists: nil,
// Initialized in [Default.initCategoryDomainIdxRefr].
categoryDomainsIdxRefr: nil,
// Initialized in [Default.initRuleListRefr].
ruleListIdxRefr: nil,
cacheManager: c.CacheManager,
clock: c.Clock,
errColl: c.ErrColl,
metrics: c.Metrics,
cacheManager: c.CacheManager,
clock: c.Clock,
domainMetrics: c.DomainMetrics,
errColl: c.ErrColl,
metrics: c.Metrics,
cacheDir: c.CacheDir,
ruleListStaleness: c.RuleLists.Staleness,
ruleListRefreshTimeout: c.RuleLists.RefreshTimeout,
categoryDomainsStaleness: c.CategoryDomainsIndex.Staleness,
categoryDomainsRefreshTimeout: c.CategoryDomainsIndex.RefreshTimeout,
ruleListMaxSize: c.RuleLists.MaxSize,
ruleListStaleness: c.RuleListsIndex.Staleness,
ruleListRefreshTimeout: c.RuleListsIndex.RefreshTimeout,
ruleListResCacheCount: c.RuleLists.ResultCacheCount,
serviceResCacheCount: c.BlockedServices.ResultCacheCount,
categoryDomainsMaxSize: c.CategoryDomainsIndex.MaxSize,
ruleListMaxSize: c.RuleListsIndex.MaxSize,
ruleListCacheEnabled: c.RuleLists.ResultCacheEnabled,
categoryDomainsResCacheCount: c.CategoryDomainsIndex.ResultCacheCount,
ruleListResCacheCount: c.RuleListsIndex.ResultCacheCount,
serviceResCacheCount: c.BlockedServices.ResultCacheCount,
domainFilterSubDomainNum: c.DomainFilterSubDomainNum,
ruleListCacheEnabled: c.RuleListsIndex.ResultCacheEnabled,
serviceResCacheEnabled: c.BlockedServices.ResultCacheEnabled,
}
@@ -140,7 +173,13 @@ func (s *Default) init(c *Config) (err error) {
errs = append(errs, err)
}
err = s.initRuleListRefr(c.RuleLists)
err = s.initRuleListRefr(c.RuleListsIndex)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
errs = append(errs, err)
}
err = s.initCategoryDomainIdxRefr(c.CategoryDomainsIndex)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
errs = append(errs, err)
@@ -231,13 +270,17 @@ func newSafeSearch(
// initRuleListRefr initializes the rule-list refresher in s. c must not be
// nil.
func (s *Default) initRuleListRefr(c *RuleListsConfig) (err error) {
func (s *Default) initRuleListRefr(c *IndexConfig) (err error) {
if !c.Enabled {
return nil
}
s.ruleListIdxRefr, err = refreshable.New(&refreshable.Config{
Logger: s.baseLogger.With(
slogutil.KeyPrefix, path.Join("filters", string(FilterIDRuleListIndex)),
),
URL: c.IndexURL,
ID: filter.ID(FilterIDRuleListIndex),
ID: FilterIDRuleListIndex,
CachePath: filepath.Join(s.cacheDir, indexFileNameRuleLists),
Staleness: c.IndexStaleness,
Timeout: c.IndexRefreshTimeout,
@@ -250,6 +293,32 @@ func (s *Default) initRuleListRefr(c *RuleListsConfig) (err error) {
return nil
}
// initCategoryDomainIdxRefr initializes the category filter domain-list
// refresher in s. c must not be nil.
func (s *Default) initCategoryDomainIdxRefr(c *IndexConfig) (err error) {
if !c.Enabled {
return nil
}
s.categoryDomainsIdxRefr, err = refreshable.New(&refreshable.Config{
Logger: s.baseLogger.With(
slogutil.KeyPrefix,
path.Join("category_filters", string(FilterIDCategoryDomainsIndex)),
),
URL: c.IndexURL,
ID: FilterIDCategoryDomainsIndex,
CachePath: filepath.Join(s.cacheDir, indexFileNameCategoryDomains),
Staleness: c.IndexStaleness,
Timeout: c.IndexRefreshTimeout,
MaxSize: c.IndexMaxSize,
})
if err != nil {
return fmt.Errorf("category domain-list index: %w", err)
}
return nil
}
// type check
var _ filter.Storage = (*Default)(nil)
@@ -331,6 +400,26 @@ func (s *Default) setEnabledParental(
if len(c.BlockedServices) > 0 && s.services != nil {
compConf.ServiceLists = s.services.RuleLists(ctx, c.BlockedServices)
}
s.setDomainFilters(compConf, c.Categories)
}
// setDomainFilters sets the category domain filters in compConf from c. c must
// not be nil.
func (s *Default) setDomainFilters(compConf *composite.Config, c *filter.ConfigCategories) {
if !c.Enabled || len(c.IDs) == 0 {
return
}
s.domainFiltersMu.RLock()
defer s.domainFiltersMu.RUnlock()
for _, id := range c.IDs {
fl := s.domainFilters[id]
if fl != nil {
compConf.CategoryFilters = append(compConf.CategoryFilters, fl)
}
}
}
// setRuleLists sets the rule-list filters in compConf from c. c must not be

View File

@@ -71,9 +71,9 @@ func TestNew(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
c := newDisabledConfig(t, newConfigRuleLists(indexURL))
c := newDisabledConfig(t, newIndexConfig(indexURL), newIndexConfig(indexURL))
c.BlockedServices = tc.services
c.RuleLists = newConfigRuleLists(indexURL)
c.RuleListsIndex = newIndexConfig(indexURL)
c.SafeSearchGeneral = tc.safeSearchGen
c.SafeSearchYouTube = tc.safeSearchYT
s, err := filterstorage.New(c)
@@ -250,6 +250,7 @@ func TestDefault_ForConfig_common(t *testing.T) {
// features properly enabled or disabled.
func newFltConfigParental(hpAdult, svc, ssGen, ssYT bool) (c *filter.ConfigParental) {
c = &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
Enabled: svc || hpAdult || ssGen || ssYT,
AdultBlockingEnabled: hpAdult,
SafeSearchGeneralEnabled: ssGen,
@@ -300,9 +301,7 @@ func newFltConfigCli(
sbConf *filter.ConfigSafeBrowsing,
) (c *filter.ConfigClient) {
return &filter.ConfigClient{
Custom: &filter.ConfigCustom{
Enabled: false,
},
Custom: &filter.ConfigCustom{},
Parental: pConf,
RuleList: rlConf,
SafeBrowsing: sbConf,

View File

@@ -12,6 +12,7 @@ import (
const (
FilterIDBlockedServiceIndex filter.ID = "blocked_service_index"
FilterIDRuleListIndex filter.ID = "rule_list_index"
FilterIDCategoryDomainsIndex filter.ID = "category_domains_index"
FilterIDStandardProfileAccess filter.ID = "standard_profile_access"
)
@@ -19,6 +20,7 @@ const (
const (
indexFileNameBlockedServices = "services.json"
indexFileNameRuleLists = "filters.json"
indexFileNameCategoryDomains = "category_filters.json"
indexFileNameStandardProfileAccess = "standard_profile_access.json"
)

View File

@@ -10,6 +10,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/dnsserver/dnsservertest"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/filterstorage"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/domain"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/filtertest"
"github.com/AdguardTeam/golibs/logutil/slogutil"
"github.com/AdguardTeam/golibs/testutil"
@@ -69,6 +70,8 @@ var (
//
// - A rule-list index with one filter with ID [filtertest.RuleListID1] and a
// rule to block [filtertest.HostBlocked].
// - A category index with one filter with ID [filtertest.CategoryID] and a
// rule to block [filtertest.HostBlocked].
// - Safe-search filters, both general and YouTube, with rules for
// [filtertest.HostSafeSearchGeneral] and
// [filtertest.HostSafeSearchYouTube].
@@ -78,9 +81,10 @@ var (
// [filtertest.HostDangerous], and [filtertest.HostNewlyRegistered].
func newDefault(tb testing.TB) (s *filterstorage.Default) {
const (
blockData = filtertest.RuleBlockStr + "\n"
ssGenData = filtertest.RuleSafeSearchGeneralHostStr + "\n"
ssYTData = filtertest.RuleSafeSearchYouTubeStr + "\n"
blockData = filtertest.RuleBlockStr + "\n"
blockDomain = filtertest.HostBlocked + "\n"
ssGenData = filtertest.RuleSafeSearchGeneralHostStr + "\n"
ssYTData = filtertest.RuleSafeSearchYouTubeStr + "\n"
)
rlCh := make(chan unit, 1)
@@ -95,6 +99,13 @@ func newDefault(tb testing.TB) (s *filterstorage.Default) {
http.StatusOK,
)
catCh := make(chan unit, 1)
_, catURL := filtertest.PrepareRefreshable(tb, catCh, blockDomain, http.StatusOK)
catIdxData := filtertest.NewCategoryIndex(catURL.String())
catIdxCh := make(chan unit, 1)
_, catIdxURL := filtertest.PrepareRefreshable(tb, catIdxCh, string(catIdxData), http.StatusOK)
ssGenCh, ssYTCh := make(chan unit, 1), make(chan unit, 1)
_, safeSearchGenURL := filtertest.PrepareRefreshable(tb, ssGenCh, ssGenData, http.StatusOK)
_, safeSearchYTURL := filtertest.PrepareRefreshable(tb, ssYTCh, ssYTData, http.StatusOK)
@@ -107,7 +118,7 @@ func newDefault(tb testing.TB) (s *filterstorage.Default) {
http.StatusOK,
)
c := newDisabledConfig(tb, newConfigRuleLists(ruleListIdxURL))
c := newDisabledConfig(tb, newIndexConfig(ruleListIdxURL), newIndexConfig(catIdxURL))
c.BlockedServices = newConfigBlockedServices(svcIdxURL)
c.HashPrefix = &filterstorage.HashPrefixConfig{
Adult: filtertest.NewHashprefixFilter(tb, filter.IDAdultBlocking),
@@ -131,6 +142,8 @@ func newDefault(tb testing.TB) (s *filterstorage.Default) {
testutil.RequireReceive(tb, rlCh, filtertest.Timeout)
testutil.RequireReceive(tb, rlIdxCh, filtertest.Timeout)
testutil.RequireReceive(tb, catCh, filtertest.Timeout)
testutil.RequireReceive(tb, catIdxCh, filtertest.Timeout)
testutil.RequireReceive(tb, ssGenCh, filtertest.Timeout)
testutil.RequireReceive(tb, ssYTCh, filtertest.Timeout)
testutil.RequireReceive(tb, svcIdxCh, filtertest.Timeout)
@@ -138,12 +151,16 @@ func newDefault(tb testing.TB) (s *filterstorage.Default) {
return s
}
// defaultSubDomainNum is the default subdomain-number value for filters.
const defaultSubDomainNum = 4
// newDisabledConfig returns a new [*filterstorage.Config] with fields related
// to filters set to disabled (if possible) and others, to the default test
// entities.
func newDisabledConfig(
tb testing.TB,
rlConf *filterstorage.RuleListsConfig,
rlConf *filterstorage.IndexConfig,
catConf *filterstorage.IndexConfig,
) (c *filterstorage.Config) {
tb.Helper()
@@ -153,11 +170,12 @@ func newDisabledConfig(
BlockedServices: &filterstorage.BlockedServicesConfig{
Enabled: false,
},
CategoryDomainsIndex: catConf,
Custom: &filterstorage.CustomConfig{
CacheCount: filtertest.CacheCount,
},
HashPrefix: &filterstorage.HashPrefixConfig{},
RuleLists: rlConf,
HashPrefix: &filterstorage.HashPrefixConfig{},
RuleListsIndex: rlConf,
SafeSearchGeneral: &filterstorage.SafeSearchConfig{
ID: filter.IDGeneralSafeSearch,
Enabled: false,
@@ -166,11 +184,13 @@ func newDisabledConfig(
ID: filter.IDYoutubeSafeSearch,
Enabled: false,
},
CacheManager: agdcache.EmptyManager{},
Clock: timeutil.SystemClock{},
ErrColl: agdtest.NewErrorCollector(),
Metrics: filter.EmptyMetrics{},
CacheDir: tb.TempDir(),
CacheManager: agdcache.EmptyManager{},
Clock: timeutil.SystemClock{},
DomainMetrics: domain.EmptyMetrics{},
ErrColl: agdtest.NewErrorCollector(),
Metrics: filter.EmptyMetrics{},
CacheDir: tb.TempDir(),
DomainFilterSubDomainNum: defaultSubDomainNum,
}
}
@@ -189,11 +209,11 @@ func newConfigBlockedServices(indexURL *url.URL) (c *filterstorage.BlockedServic
}
}
// newConfigRuleLists is a test helper that returns a new *ConfigRuleLists with
// the given index URL. The rest of the fields are set to the corresponding
// newIndexConfig is a test helper that returns a new *IndexConfig with the
// given index URL. The rest of the fields are set to the corresponding
// [filtertest] values.
func newConfigRuleLists(indexURL *url.URL) (c *filterstorage.RuleListsConfig) {
return &filterstorage.RuleListsConfig{
func newIndexConfig(indexURL *url.URL) (c *filterstorage.IndexConfig) {
return &filterstorage.IndexConfig{
IndexURL: indexURL,
IndexMaxSize: filtertest.FilterMaxSize,
MaxSize: filtertest.FilterMaxSize,
@@ -203,6 +223,7 @@ func newConfigRuleLists(indexURL *url.URL) (c *filterstorage.RuleListsConfig) {
Staleness: filtertest.Staleness,
ResultCacheCount: filtertest.CacheCount,
ResultCacheEnabled: true,
Enabled: true,
}
}

View File

@@ -11,11 +11,13 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/errcoll"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/domain"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/refreshable"
"github.com/AdguardTeam/AdGuardDNS/internal/filter/internal/rulelist"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/logutil/slogutil"
"github.com/AdguardTeam/golibs/service"
"golang.org/x/net/publicsuffix"
)
// type check
@@ -38,25 +40,18 @@ func (s *Default) Refresh(ctx context.Context) (err error) {
// If acceptStale is true, the cache files are used regardless of their
// staleness.
func (s *Default) refresh(ctx context.Context, acceptStale bool) (err error) {
resp, err := s.loadIndex(ctx, acceptStale)
newRuleLists, err := s.loadRuleLists(ctx, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return err
}
s.logger.InfoContext(ctx, "loaded index", "num_filters", len(resp.Filters))
fls := resp.toInternal(ctx, s.logger, s.errColl)
s.logger.InfoContext(ctx, "validated lists", "num_lists", len(fls))
newRuleLists, err := s.refreshRuleLists(ctx, fls, acceptStale)
newDomainFilters, err := s.loadCategoryFilters(ctx, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return err
}
s.logger.InfoContext(ctx, "compiled lists", "num_lists", len(newRuleLists))
err = s.refreshServices(ctx, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
@@ -70,16 +65,80 @@ func (s *Default) refresh(ctx context.Context, acceptStale bool) (err error) {
}
s.resetRuleLists(newRuleLists)
s.resetDomainFilters(newDomainFilters)
return nil
}
// loadIndex fetches, decodes, and returns the filter list index data of the
// storage. resp.Filters are sorted.
func (s *Default) loadIndex(
// loadRuleLists loads the rule-lists from the storage. If acceptStale is true,
// the cache files are used regardless of their staleness.
func (s *Default) loadRuleLists(ctx context.Context, acceptStale bool) (rls ruleLists, err error) {
if s.ruleListIdxRefr == nil {
s.logger.DebugContext(ctx, "loading index skipped")
return nil, nil
}
resp, err := s.loadIndex(ctx, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return nil, err
}
s.logger.InfoContext(ctx, "loaded index", "num_filters", len(resp.Filters))
fls := resp.toInternal(ctx, s.logger, s.errColl)
s.logger.InfoContext(ctx, "validated lists", "num_lists", len(fls))
newRuleLists, err := s.refreshRuleLists(ctx, fls, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return nil, err
}
s.logger.InfoContext(ctx, "compiled lists", "num_lists", len(newRuleLists))
return newRuleLists, nil
}
// loadCategoryFilters loads the category filter domain-lists from the storage.
// If acceptStale is true, the cache files are used regardless of their
// staleness.
func (s *Default) loadCategoryFilters(
ctx context.Context,
acceptStale bool,
) (resp *indexResp, err error) {
) (dfs domainFilters, err error) {
if s.categoryDomainsIdxRefr == nil {
s.logger.DebugContext(ctx, "loading category index skipped")
return nil, nil
}
resp, err := s.loadCategoryIndex(ctx, acceptStale)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return nil, err
}
s.logger.InfoContext(ctx, "loaded category index")
fls := resp.toInternal(ctx, s.logger, s.errColl)
s.logger.InfoContext(ctx, "validated categories", "num_categories", len(fls))
dfs, err = s.refreshDomainFilters(ctx, fls)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return nil, err
}
s.logger.InfoContext(ctx, "compiled categories", "num_categories", len(dfs))
return dfs, nil
}
// loadIndex fetches, decodes, and returns the filter list index data of the
// storage. resp.Filters are sorted.
func (s *Default) loadIndex(ctx context.Context, acceptStale bool) (resp *indexResp, err error) {
b, err := s.ruleListIdxRefr.Refresh(ctx, acceptStale)
if err != nil {
return nil, fmt.Errorf("loading index: %w", err)
@@ -96,6 +155,26 @@ func (s *Default) loadIndex(
return resp, nil
}
// loadCategoryIndex fetches, decodes, and returns the category filter list
// index data of the storage.
func (s *Default) loadCategoryIndex(
ctx context.Context,
acceptStale bool,
) (resp *categoryResp, err error) {
b, err := s.categoryDomainsIdxRefr.Refresh(ctx, acceptStale)
if err != nil {
return nil, fmt.Errorf("loading index: %w", err)
}
resp = &categoryResp{}
err = json.Unmarshal(b, resp)
if err != nil {
return nil, fmt.Errorf("decoding: %w", err)
}
return resp, nil
}
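The decode step above — fetch the index bytes, unmarshal them, and wrap any error — can be sketched in isolation. The index shape below is purely hypothetical (the real `categoryResp` fields are not shown in this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// indexResp is a hypothetical stand-in for a decoded filter index: a list
// of filters, each with an ID and a download URL.
type indexResp struct {
	Filters []struct {
		ID  string `json:"id"`
		URL string `json:"url"`
	} `json:"filters"`
}

// decodeIndex mirrors the decode step: unmarshal the bytes or return a
// wrapped error.
func decodeIndex(b []byte) (resp *indexResp, err error) {
	resp = &indexResp{}
	if err = json.Unmarshal(b, resp); err != nil {
		return nil, fmt.Errorf("decoding: %w", err)
	}

	return resp, nil
}

func main() {
	resp, err := decodeIndex([]byte(`{"filters":[{"id":"adult","url":"https://example.com/adult.txt"}]}`))
	fmt.Println(err == nil, len(resp.Filters), resp.Filters[0].ID)
}
```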
// refrResult is a result of refreshing a single rule list.
type refrResult struct {
// refr is a refreshable filter created from the provided *indexData. It
@@ -152,6 +231,19 @@ func (s *Default) resultRuleList(ctx context.Context, res refrResult) (rl *rulel
return res.refr
}
// prevRuleList returns the previous version of the filter, if there is one.
func (s *Default) prevRuleList(id filter.ID) (rl *rulelist.Refreshable) {
s.ruleListsMu.RLock()
defer s.ruleListsMu.RUnlock()
var ok bool
if rl, ok = s.ruleLists[id]; ok {
return rl
}
return nil
}
// refreshRuleList creates a [rulelist.Refreshable] from the data loaded with
// fl. It also adds the cache to the cache manager. It is intended to be
// used as a goroutine. fl must not be nil.
@@ -161,7 +253,17 @@ func (s *Default) refreshRuleList(
acceptStale bool,
resCh chan<- refrResult,
) {
defer recoverAndLog(ctx, s.logger, fl.id, resCh)
defer func() {
err := errors.FromRecovered(recover())
if err == nil {
return
}
s.logger.ErrorContext(ctx, "recovered panic", slogutil.KeyError, err)
slogutil.PrintStack(ctx, s.logger, slog.LevelError)
resCh <- refrResult{id: fl.id, err: err}
}()
fltID := fl.id
res := refrResult{
@@ -207,31 +309,139 @@ func (s *Default) refreshRuleList(
resCh <- res
}
// recoverAndLog is a deferred helper that recovers from a panic and logs the
// panic value with the given logger. Sends the recovered value into resCh.
func recoverAndLog(ctx context.Context, l *slog.Logger, id filter.ID, resCh chan<- refrResult) {
err := errors.FromRecovered(recover())
if err == nil {
// refrDomainFilterResult is a result of refreshing a single domain filter.
type refrDomainFilterResult struct {
// flt is a domain filter. It must not be nil if err is nil.
flt *domain.Filter
// err is a non-nil error if refreshing failed.
err error
// catID is the category ID of the domain filter.
catID filter.CategoryID
}
// refreshDomainFilters refreshes the domain filters for the given category
// data. It returns a map of newly initialized and refreshed domain filters.
func (s *Default) refreshDomainFilters(
ctx context.Context,
data []*categoryData,
) (dls domainFilters, err error) {
lenFls := len(data)
resCh := make(chan refrDomainFilterResult, lenFls)
for _, cat := range data {
go s.refreshDomainFilter(ctx, cat, resCh)
}
dls = make(domainFilters, lenFls)
for range lenFls {
select {
case <-ctx.Done():
return nil, ctx.Err()
case res := <-resCh:
dls[res.catID] = s.resultDomainFilter(ctx, res)
}
}
return dls, nil
}
// refreshDomainFilter creates and refreshes a domain filter for the given
// category data and sends the result to resCh. cat must not be nil.
func (s *Default) refreshDomainFilter(
ctx context.Context,
cat *categoryData,
resCh chan<- refrDomainFilterResult,
) {
defer func() {
err := errors.FromRecovered(recover())
if err == nil {
return
}
s.logger.ErrorContext(ctx, "recovered panic", slogutil.KeyError, err)
slogutil.PrintStack(ctx, s.logger, slog.LevelError)
resCh <- refrDomainFilterResult{catID: cat.id, err: err}
}()
res := refrDomainFilterResult{
catID: cat.id,
}
df, err := s.initDomainFilter(cat)
if err != nil {
res.err = fmt.Errorf("creating domain filter: %w", err)
resCh <- res
return
}
l.ErrorContext(ctx, "recovered panic", slogutil.KeyError, err)
slogutil.PrintStack(ctx, l, slog.LevelError)
err = df.Refresh(ctx)
if err != nil {
res.err = fmt.Errorf("refreshing domain filter: %w", err)
resCh <- res
resCh <- refrResult{
id: id,
err: err,
return
}
res.flt = df
resCh <- res
}
// prevRuleList returns the previous version of the filter, if there is one.
func (s *Default) prevRuleList(id filter.ID) (rl *rulelist.Refreshable) {
s.ruleListsMu.RLock()
defer s.ruleListsMu.RUnlock()
// initDomainFilter creates a domain filter for the given category data. cat
// must not be nil.
func (s *Default) initDomainFilter(cat *categoryData) (df *domain.Filter, err error) {
catID := cat.id
catIDStr := string(catID)
cacheID := path.Join(domain.IDPrefix, catIDStr)
return domain.NewFilter(&domain.FilterConfig{
Logger: s.baseLogger.With(slogutil.KeyPrefix, cacheID),
CacheManager: s.cacheManager,
URL: cat.url,
ErrColl: s.errColl,
DomainMetrics: s.domainMetrics,
Metrics: s.metrics,
PublicSuffixList: publicsuffix.List,
CategoryID: catID,
ResultListID: filter.IDCategory,
CachePath: filepath.Join(s.cacheDir, catIDStr),
Staleness: s.categoryDomainsStaleness,
RefreshTimeout: s.categoryDomainsRefreshTimeout,
CacheCount: s.categoryDomainsResCacheCount,
MaxSize: s.categoryDomainsMaxSize,
SubDomainNum: s.domainFilterSubDomainNum,
})
}
// resultDomainFilter returns a non-nil [domain.Filter] if res.err is nil.
// Otherwise, it returns the previous domain filter for the category and logs
// the error.
func (s *Default) resultDomainFilter(
ctx context.Context,
res refrDomainFilterResult,
) (rl *domain.Filter) {
catID := res.catID
if res.err != nil {
err := fmt.Errorf("initializing domain filter %q: %w", catID, res.err)
errcoll.Collect(ctx, s.errColl, s.logger, "domain filter error", err)
return s.prevDomainFilter(catID)
}
return res.flt
}
// prevDomainFilter returns the previous version of the filter, if there is one.
func (s *Default) prevDomainFilter(id filter.CategoryID) (df *domain.Filter) {
s.domainFiltersMu.RLock()
defer s.domainFiltersMu.RUnlock()
var ok bool
if rl, ok = s.ruleLists[id]; ok {
return rl
if df, ok = s.domainFilters[id]; ok {
return df
}
return nil
@@ -284,6 +494,14 @@ func (s *Default) resetRuleLists(rls ruleLists) {
s.ruleLists = rls
}
// resetDomainFilters replaces the storage's domain filters.
func (s *Default) resetDomainFilters(dfs domainFilters) {
s.domainFiltersMu.Lock()
defer s.domainFiltersMu.Unlock()
s.domainFilters = dfs
}
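The read-mostly pattern used by `prevDomainFilter` and `resetDomainFilters` — an `RWMutex` guarding a map that lookups read under `RLock` and a refresh replaces wholesale under `Lock` — can be sketched as follows; the `registry` type and string values are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// registry guards a map that readers access under RLock and a refresher
// replaces wholesale under Lock, so lookups never observe a half-built map.
type registry struct {
	mu      sync.RWMutex
	filters map[string]string
}

// get returns the filter for id, or the empty string if there is none.
func (r *registry) get(id string) (f string) {
	r.mu.RLock()
	defer r.mu.RUnlock()

	return r.filters[id]
}

// reset replaces the whole map, as resetDomainFilters does after a refresh.
func (r *registry) reset(filters map[string]string) {
	r.mu.Lock()
	defer r.mu.Unlock()

	r.filters = filters
}

func main() {
	r := &registry{}
	r.reset(map[string]string{"adult": "v1"})
	fmt.Println(r.get("adult"), r.get("news") == "")
}
```

Swapping the entire map, rather than mutating it in place, keeps the write-lock critical section to a single pointer assignment.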
// RefreshInitial loads the content of the storage, using cached files if any,
// regardless of their staleness.
func (s *Default) RefreshInitial(ctx context.Context) (err error) {

View File

@@ -23,9 +23,10 @@ import (
func TestDefault_Refresh(t *testing.T) {
// TODO(a.garipov): Consider ways to DRY this code with [newDefault].
const (
blockRule = filtertest.RuleBlockStr + "\n"
ssGenRule = filtertest.RuleSafeSearchGeneralHostStr + "\n"
ssYTRule = filtertest.RuleSafeSearchYouTubeStr + "\n"
blockRule = filtertest.RuleBlockStr + "\n"
blockDomain = filtertest.HostBlocked + "\n"
ssGenRule = filtertest.RuleSafeSearchGeneralHostStr + "\n"
ssYTRule = filtertest.RuleSafeSearchYouTubeStr + "\n"
)
rlCh := make(chan unit, 1)
@@ -35,6 +36,13 @@ func TestDefault_Refresh(t *testing.T) {
rlIdxCh := make(chan unit, 1)
_, ruleListIdxURL := filtertest.PrepareRefreshable(t, rlIdxCh, string(rlIdxData), http.StatusOK)
catCh := make(chan unit, 1)
_, catURL := filtertest.PrepareRefreshable(t, catCh, blockDomain, http.StatusOK)
catIdxData := filtertest.NewCategoryIndex(catURL.String())
catIdxCh := make(chan unit, 1)
_, catIdxURL := filtertest.PrepareRefreshable(t, catIdxCh, string(catIdxData), http.StatusOK)
ssGenCh, ssYTCh := make(chan unit, 1), make(chan unit, 1)
_, safeSearchGenURL := filtertest.PrepareRefreshable(t, ssGenCh, ssGenRule, http.StatusOK)
_, safeSearchYTURL := filtertest.PrepareRefreshable(t, ssYTCh, ssYTRule, http.StatusOK)
@@ -47,7 +55,7 @@ func TestDefault_Refresh(t *testing.T) {
http.StatusOK,
)
c := newDisabledConfig(t, newConfigRuleLists(ruleListIdxURL))
c := newDisabledConfig(t, newIndexConfig(ruleListIdxURL), newIndexConfig(catIdxURL))
c.BlockedServices = newConfigBlockedServices(svcIdxURL)
c.SafeSearchGeneral = newConfigSafeSearch(safeSearchGenURL, filter.IDGeneralSafeSearch)
c.SafeSearchYouTube = newConfigSafeSearch(safeSearchYTURL, filter.IDYoutubeSafeSearch)
@@ -60,7 +68,9 @@ func TestDefault_Refresh(t *testing.T) {
require.NoError(t, err)
testutil.RequireReceive(t, rlCh, filtertest.Timeout)
testutil.RequireReceive(t, catCh, filtertest.Timeout)
testutil.RequireReceive(t, rlIdxCh, filtertest.Timeout)
testutil.RequireReceive(t, catIdxCh, filtertest.Timeout)
testutil.RequireReceive(t, ssGenCh, filtertest.Timeout)
testutil.RequireReceive(t, ssYTCh, filtertest.Timeout)
testutil.RequireReceive(t, svcIdxCh, filtertest.Timeout)
@@ -73,7 +83,9 @@ func TestDefault_Refresh(t *testing.T) {
// Make sure that the servers weren't called the second time.
require.Empty(t, rlCh)
require.Empty(t, catCh)
require.Empty(t, rlIdxCh)
require.Empty(t, catIdxCh)
require.Empty(t, ssGenCh)
require.Empty(t, ssYTCh)
require.Empty(t, svcIdxCh)
@@ -83,7 +95,8 @@ func TestDefault_Refresh(t *testing.T) {
func TestDefault_Refresh_usePrevious(t *testing.T) {
const (
blockRule = filtertest.RuleBlockStr + "\n"
blockRule = filtertest.RuleBlockStr + "\n"
blockDomain = filtertest.Host + "\n"
)
codeCh := make(chan int, 2)
@@ -92,21 +105,24 @@ func TestDefault_Refresh_usePrevious(t *testing.T) {
ruleListURL := newCodeServer(t, blockRule, codeCh)
rlIdxData := filtertest.NewRuleListIndex(ruleListURL.String())
_, ruleListIdxURL := filtertest.PrepareRefreshable(t, nil, string(rlIdxData), http.StatusOK)
_, rlIdxURL := filtertest.PrepareRefreshable(t, nil, string(rlIdxData), http.StatusOK)
_, catURL := filtertest.PrepareRefreshable(t, nil, blockDomain, http.StatusOK)
catIdxData := filtertest.NewCategoryIndex(catURL.String())
_, catIdxURL := filtertest.PrepareRefreshable(t, nil, string(catIdxData), http.StatusOK)
// Use a smaller staleness value to make sure that the filter is refreshed.
ruleListsConf := newConfigRuleLists(ruleListIdxURL)
ruleListsConf := newIndexConfig(rlIdxURL)
ruleListsConf.Staleness = 1 * time.Microsecond
c := newDisabledConfig(t, ruleListsConf)
c.RuleLists = ruleListsConf
c := newDisabledConfig(t, ruleListsConf, newIndexConfig(catIdxURL))
c.ErrColl = &agdtest.ErrorCollector{
OnCollect: func(_ context.Context, err error) {
errStatus := &agdhttp.StatusError{}
assert.ErrorAs(t, err, &errStatus)
assert.Equal(t, errStatus.Expected, http.StatusOK)
assert.Equal(t, errStatus.Got, http.StatusNotFound)
assert.Equal(t, errStatus.ServerName, filtertest.ServerName)
assert.Equal(t, http.StatusOK, errStatus.Expected)
assert.Equal(t, http.StatusNotFound, errStatus.Got)
assert.Equal(t, filtertest.ServerName, errStatus.ServerName)
},
}
@@ -120,8 +136,10 @@ func TestDefault_Refresh_usePrevious(t *testing.T) {
require.True(t, s.HasListID(filtertest.RuleListID1))
fltConf := &filter.ConfigClient{
Custom: &filter.ConfigCustom{},
Parental: &filter.ConfigParental{},
Custom: &filter.ConfigCustom{},
Parental: &filter.ConfigParental{
Categories: &filter.ConfigCategories{},
},
RuleList: &filter.ConfigRuleList{
IDs: []filter.ID{filtertest.RuleListID1},
Enabled: true,

View File

@@ -60,6 +60,10 @@ const (
// by the service blocker.
IDBlockedService ID = "blocked_service"
// IDCategory is the special shared filter ID used when a request was
// filtered by a category filter.
IDCategory ID = "category"
// IDCustom is the special shared filter ID used when a request was filtered
// by a custom profile rule.
IDCustom ID = "custom"

View File

@@ -66,6 +66,9 @@ type Config struct {
// Custom is the custom rule-list filter of the profile, if any.
Custom filter.Custom
// CategoryFilters are the enabled category request filters of the profile.
CategoryFilters []RequestFilter
// RuleLists are the enabled rule-list filters of the profile or filtering
// group, if any. All items must not be nil.
RuleLists []*rulelist.Refreshable
@@ -95,6 +98,10 @@ func New(c *Config) (f *Filter) {
f.reqFilters = appendIfNotNilUF(f.reqFilters, c.YouTubeSafeSearch, f.ufReq, f.ufRes)
f.reqFilters = appendIfNotNil(f.reqFilters, c.NewRegisteredDomains)
for _, df := range c.CategoryFilters {
f.reqFilters = appendIfNotNil(f.reqFilters, df)
}
return f
}

View File

@@ -511,6 +511,30 @@ func TestFilter_FilterRequest_services(t *testing.T) {
assert.Equal(t, want, res)
}
func TestFilter_FilterRequest_domainFilters(t *testing.T) {
t.Parallel()
const (
fltRespTTL = agdtest.FilteredResponseTTLSec
testDomain = filtertest.HostBlocked
)
domainFilter := filtertest.NewDomainFilter(t, testDomain)
f := newComposite(t, &composite.Config{
CategoryFilters: []composite.RequestFilter{domainFilter},
})
ctx, req := newReqData(t)
res, err := f.FilterRequest(ctx, req)
require.NoError(t, err)
want := &filter.ResultBlocked{
List: filter.IDCategory,
Rule: filter.RuleText(filtertest.CategoryIDStr),
}
assert.Equal(t, want, res)
}
func TestFilter_FilterResponse(t *testing.T) {
t.Parallel()

View File

@@ -34,9 +34,6 @@ type FilterConfig struct {
// Logger is used for logging the operation of the filter.
Logger *slog.Logger
// Cloner is used to clone messages taken from filtering-result cache.
Cloner *dnsmsg.Cloner
// CacheManager is the global cache manager. CacheManager must not be nil.
CacheManager agdcache.Manager
@@ -56,8 +53,13 @@ type FilterConfig struct {
// domain.
PublicSuffixList cookiejar.PublicSuffixList
// ID is the ID of this storage for logging and error reporting.
ID filter.ID
// CategoryID is the category identifier used for logging and error
// reporting.
CategoryID filter.CategoryID
// ResultListID is the identifier of the filter list used in the request
// filtering result.
ResultListID filter.ID
// CachePath is the path to the file containing the cached filtered
// hostnames, one per line.
@@ -91,7 +93,6 @@ type FilterConfig struct {
// TODO(f.setrakov): Consider DRYing it with the hashprefix filter.
type Filter struct {
logger *slog.Logger
cloner *dnsmsg.Cloner
domains *atomic.Pointer[container.MapSet[string]]
refr *refreshable.Refreshable
subDomainsPool *syncutil.Pool[[]string]
@@ -100,7 +101,8 @@ type Filter struct {
publicSuffixList cookiejar.PublicSuffixList
metrics filter.Metrics
resCache agdcache.Interface[rulelist.CacheKey, filter.Result]
id filter.ID
catID filter.CategoryID
resListID filter.ID
subDomainNum int
}
@@ -110,21 +112,19 @@ type Filter struct {
// TODO(a.garipov): Consider better names.
const IDPrefix = "filters/domain"
// NewFilter returns a new domain filter. It also adds the caches with IDs
// [FilterListIDAdultBlocking], [FilterListIDSafeBrowsing], and
// [FilterListIDNewRegDomains] to the cache manager. c must not be nil.
// NewFilter returns a new domain filter. It adds the cache to the cache
// manager specified in c. c must not be nil.
func NewFilter(c *FilterConfig) (f *Filter, err error) {
id := c.ID
catID := c.CategoryID
resCache := agdcache.NewLRU[rulelist.CacheKey, filter.Result](&agdcache.LRUConfig{
Count: c.CacheCount,
})
c.CacheManager.Add(path.Join(IDPrefix, string(id)), resCache)
c.CacheManager.Add(path.Join(IDPrefix, string(catID)), resCache)
f = &Filter{
logger: c.Logger,
cloner: c.Cloner,
domains: &atomic.Pointer[container.MapSet[string]]{},
errColl: c.ErrColl,
// #nosec G115 -- Assume that c.SubDomainNum is always less than or
@@ -136,7 +136,8 @@ func NewFilter(c *FilterConfig) (f *Filter, err error) {
metrics: c.Metrics,
publicSuffixList: c.PublicSuffixList,
resCache: resCache,
id: id,
catID: catID,
resListID: c.ResultListID,
// #nosec G115 -- The value is a constant less than int accommodates.
subDomainNum: int(c.SubDomainNum),
}
@@ -144,7 +145,7 @@ func NewFilter(c *FilterConfig) (f *Filter, err error) {
f.refr, err = refreshable.New(&refreshable.Config{
Logger: f.logger,
URL: c.URL,
ID: id,
ID: filter.ID(catID),
CachePath: c.CachePath,
Staleness: c.Staleness,
Timeout: c.RefreshTimeout,
@@ -158,7 +159,7 @@ func NewFilter(c *FilterConfig) (f *Filter, err error) {
}
// FilterRequest implements the [composite.RequestFilter] interface for *Filter.
// It modifies the request or response if host matches f.
// It blocks the request if host matches f.
func (f *Filter) FilterRequest(
ctx context.Context,
req *filter.Request,
@@ -167,9 +168,9 @@ func (f *Filter) FilterRequest(
cacheKey := rulelist.NewCacheKey(host, qt, cl, false)
item, ok := f.resCache.Get(cacheKey)
f.domainMtrc.IncrementLookups(ctx, ok)
f.domainMtrc.IncrementLookups(ctx, f.catID, ok)
if ok {
return f.clonedResult(req.DNS, item), nil
return item, nil
}
if !isFilterable(qt) {
@@ -197,15 +198,14 @@ func (f *Filter) FilterRequest(
return nil, nil
}
r, err = f.filteredResult(req, matched)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return nil, err
r = &filter.ResultBlocked{
List: f.resListID,
Rule: filter.RuleText(f.catID),
}
f.setInCache(cacheKey, r)
f.resCache.Set(cacheKey, r)
f.domainMtrc.UpdateCacheSize(ctx, f.resCache.Len())
f.domainMtrc.UpdateCacheSize(ctx, f.catID, f.resCache.Len())
return r, nil
}
@@ -217,55 +217,6 @@ func isFilterable(qt dnsmsg.RRType) (ok bool) {
return qt == dns.TypeHTTPS || fam != netutil.AddrFamilyNone
}
// clonedResult returns a clone of the result based on its type. r must be nil,
// [*filter.ResultModifiedRequest], or [*filter.ResultModifiedResponse].
func (f *Filter) clonedResult(req *dns.Msg, r filter.Result) (clone filter.Result) {
switch r := r.(type) {
case nil:
return nil
case *filter.ResultModifiedRequest:
return r.Clone(f.cloner)
case *filter.ResultModifiedResponse:
return r.CloneForReq(f.cloner, req)
default:
panic(fmt.Errorf("domain: unexpected type for result: %T(%[1]v)", r))
}
}
// filteredResult returns a filtered request or response.
func (f *Filter) filteredResult(
req *filter.Request,
matched string,
) (r filter.Result, err error) {
resp, err := req.Messages.NewBlockedResp(req.DNS, nil)
if err != nil {
return nil, fmt.Errorf("filter %s: creating modified result: %w", f.id, err)
}
return &filter.ResultModifiedResponse{
Msg: resp,
List: f.id,
Rule: filter.RuleText(matched),
}, nil
}
// setInCache sets r in cache. It clones the result to make sure that
// modifications to the result message down the pipeline don't interfere with
// the cached value. r must be either [*filter.ResultModifiedRequest] or
// [*filter.ResultModifiedResponse].
//
// See AGDNS-359.
func (f *Filter) setInCache(k rulelist.CacheKey, r filter.Result) {
switch r := r.(type) {
case *filter.ResultModifiedRequest:
f.resCache.Set(k, r.Clone(f.cloner))
case *filter.ResultModifiedResponse:
f.resCache.Set(k, r.Clone(f.cloner))
default:
panic(fmt.Errorf("domain: unexpected type for result: %T(%[1]v)", r))
}
}
// type check
var _ service.Refresher = (*Filter)(nil)
@@ -276,7 +227,7 @@ func (f *Filter) Refresh(ctx context.Context) (err error) {
err = f.refresh(ctx, false)
if err != nil {
errcoll.Collect(ctx, f.errColl, f.logger, fmt.Sprintf("refreshing %q", f.id), err)
errcoll.Collect(ctx, f.errColl, f.logger, fmt.Sprintf("refreshing %q", f.catID), err)
}
return err
@@ -303,7 +254,8 @@ func (f *Filter) refresh(ctx context.Context, acceptStale bool) (err error) {
var count int
defer func() {
// TODO(a.garipov): Consider using [agdtime.Clock].
f.metrics.SetFilterStatus(ctx, string(f.id), time.Now(), count, err)
// TODO(a.garipov): Consider using a prefix or a label for categories.
f.metrics.SetFilterStatus(ctx, string(f.catID), time.Now(), count, err)
}()
b, err := f.refr.Refresh(ctx, acceptStale)
@@ -314,7 +266,7 @@ func (f *Filter) refresh(ctx context.Context, acceptStale bool) (err error) {
count, err = f.resetDomains(b)
if err != nil {
return fmt.Errorf("%s: resetting: %w", f.id, err)
return fmt.Errorf("%s: resetting: %w", f.catID, err)
}
f.resCache.Clear()

View File

@@ -2,7 +2,6 @@ package domain_test
import (
"net/http"
"net/netip"
"os"
"testing"
"time"
@@ -31,6 +30,12 @@ var _ composite.RequestFilter = (*domain.Filter)(nil)
// testDomains is the host data for tests.
const testDomains = filtertest.HostCategory + "\n"
// testResult is the common blocked result for tests.
var testResult = &filter.ResultBlocked{
List: filter.IDCategory,
Rule: filter.RuleText(filtertest.RuleListIDDomain),
}
func TestFilter_FilterRequest(t *testing.T) {
t.Parallel()
@@ -80,18 +85,9 @@ func TestFilter_FilterRequest(t *testing.T) {
})
require.NoError(t, err)
var wantRes filter.Result
if tc.wantResult {
wantRes = newModRespResult(
t,
req,
msgs,
netip.IPv4Unspecified(),
filtertest.HostCategory,
)
filtertest.AssertEqualResult(t, testResult, r)
}
filtertest.AssertEqualResult(t, wantRes, r)
})
}
}
@@ -140,14 +136,14 @@ func TestFilter_Refresh(t *testing.T) {
f, err := domain.NewFilter(&domain.FilterConfig{
Logger: slogutil.NewDiscardLogger(),
Cloner: agdtest.NewCloner(),
CacheManager: agdcache.EmptyManager{},
URL: srvURL,
ErrColl: agdtest.NewErrorCollector(),
DomainMetrics: domain.EmptyMetrics{},
Metrics: filter.EmptyMetrics{},
PublicSuffixList: publicsuffix.List,
ID: filtertest.RuleListIDDomain,
CategoryID: filtertest.CategoryID,
ResultListID: filter.IDCategory,
CachePath: cachePath,
Staleness: filtertest.Staleness,
CacheTTL: filtertest.CacheTTL,
@@ -175,8 +171,6 @@ func TestFilter_FilterRequest_staleCache(t *testing.T) {
refrCh := make(chan struct{}, 1)
cachePath, srvURL := filtertest.PrepareRefreshable(t, refrCh, testDomains, http.StatusOK)
msgs := agdtest.NewConstructor(t)
// Put some initial data into the cache to avoid the first refresh.
cf, err := os.OpenFile(cachePath, os.O_WRONLY|os.O_APPEND, os.ModeAppend)
@@ -188,18 +182,16 @@ func TestFilter_FilterRequest_staleCache(t *testing.T) {
// Create the filter.
cloner := agdtest.NewCloner()
fconf := &domain.FilterConfig{
Logger: slogutil.NewDiscardLogger(),
Cloner: cloner,
CacheManager: agdcache.EmptyManager{},
URL: srvURL,
ErrColl: agdtest.NewErrorCollector(),
DomainMetrics: domain.EmptyMetrics{},
Metrics: filter.EmptyMetrics{},
PublicSuffixList: publicsuffix.List,
ID: filtertest.RuleListIDDomain,
CategoryID: filtertest.CategoryID,
ResultListID: filter.IDCategory,
CachePath: cachePath,
Staleness: filtertest.Staleness,
CacheTTL: filtertest.CacheTTL,
@@ -232,14 +224,7 @@ func TestFilter_FilterRequest_staleCache(t *testing.T) {
r, err = f.FilterRequest(ctx, otherHostReq)
require.NoError(t, err)
wantRes := newModRespResult(
t,
otherHostReq.DNS,
msgs,
netip.IPv4Unspecified(),
filtertest.Host,
)
filtertest.AssertEqualResult(t, wantRes, r)
filtertest.AssertEqualResult(t, testResult, r)
}))
require.True(t, t.Run("refresh", func(t *testing.T) {
@@ -273,34 +258,6 @@ func TestFilter_FilterRequest_staleCache(t *testing.T) {
r, err = f.FilterRequest(ctx, hostReq)
require.NoError(t, err)
wantRes := newModRespResult(
t,
hostReq.DNS,
msgs,
netip.IPv4Unspecified(),
filtertest.HostCategory,
)
filtertest.AssertEqualResult(t, wantRes, r)
filtertest.AssertEqualResult(t, testResult, r)
}))
}
// newModRespResult is a helper for creating modified response result for tests.
// req must not be nil.
func newModRespResult(
tb testing.TB,
req *dns.Msg,
messages *dnsmsg.Constructor,
replIP netip.Addr,
rule string,
) (r *filter.ResultModifiedResponse) {
tb.Helper()
resp, err := messages.NewRespIP(req, replIP)
require.NoError(tb, err)
return &filter.ResultModifiedResponse{
Msg: resp,
List: filtertest.RuleListIDDomain,
Rule: filter.RuleText(rule),
}
}

View File

@@ -2,17 +2,20 @@ package domain
import (
"context"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
)
// Metrics is an interface used for collection of the domain filter
// statistics.
type Metrics interface {
// IncrementLookups increments the number of lookups. hit is true if the
// lookup returned a value.
IncrementLookups(ctx context.Context, hit bool)
// lookup returned a value. categoryID must be a valid label name.
IncrementLookups(ctx context.Context, categoryID filter.CategoryID, hit bool)
// UpdateCacheSize is called when the cache size is updated.
UpdateCacheSize(ctx context.Context, cacheLen int)
// UpdateCacheSize is called when the cache size is updated. categoryID
// must be a valid label name.
UpdateCacheSize(ctx context.Context, categoryID filter.CategoryID, cacheLen int)
}
// EmptyMetrics is the implementation of the [Metrics] interface that does nothing.
@@ -22,7 +25,7 @@ type EmptyMetrics struct{}
var _ Metrics = EmptyMetrics{}
// IncrementLookups implements the [Metrics] interface for EmptyMetrics.
func (EmptyMetrics) IncrementLookups(_ context.Context, _ bool) {}
func (EmptyMetrics) IncrementLookups(_ context.Context, _ filter.CategoryID, _ bool) {}
// UpdateCacheSize implements the [Metrics] interface for EmptyMetrics.
func (EmptyMetrics) UpdateCacheSize(_ context.Context, _ int) {}
func (EmptyMetrics) UpdateCacheSize(_ context.Context, _ filter.CategoryID, _ int) {}

View File

@@ -14,21 +14,22 @@ import (
"golang.org/x/net/publicsuffix"
)
// NewDomainFilter is a helper constructor of domain filters for tests.
// NewDomainFilter is a helper constructor of domain filters for tests. It
// sets the filter category ID to [CategoryID].
func NewDomainFilter(tb testing.TB, data string) (f *domain.Filter) {
tb.Helper()
cachePath, srvURL := PrepareRefreshable(tb, nil, data, http.StatusOK)
f, err := domain.NewFilter(&domain.FilterConfig{
Logger: slogutil.NewDiscardLogger(),
Cloner: agdtest.NewCloner(),
CacheManager: agdcache.EmptyManager{},
URL: srvURL,
ErrColl: agdtest.NewErrorCollector(),
DomainMetrics: domain.EmptyMetrics{},
Metrics: filter.EmptyMetrics{},
PublicSuffixList: publicsuffix.List,
ID: RuleListIDDomain,
CategoryID: CategoryID,
ResultListID: filter.IDCategory,
CachePath: cachePath,
Staleness: Staleness,
CacheTTL: CacheTTL,

View File

@@ -132,6 +132,9 @@ const (
RuleListID1 filter.ID = RuleListID1Str
RuleListID2 filter.ID = RuleListID2Str
RuleListIDDomain filter.ID = RuleListIDDomainStr
CategoryIDStr = RuleListIDDomainStr
CategoryID filter.CategoryID = CategoryIDStr
)
// NewRuleListIndex returns a rule-list index containing a record for a filter
@@ -145,6 +148,18 @@ func NewRuleListIndex(downloadURL string) (b []byte) {
}))
}
// NewCategoryIndex returns a category rule-list index containing a filter for
// [CategoryIDStr] and downloadURL as the download URL.
func NewCategoryIndex(downloadURL string) (b []byte) {
return errors.Must(json.Marshal(map[string]any{
"filters": map[string]any{
CategoryIDStr: map[string]any{
"downloadUrl": downloadURL,
},
},
}))
}
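For reference, the index layout that NewCategoryIndex produces can be sketched with plain encoding/json; the category ID and URL below are hypothetical placeholders, not the real test constants:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// newCategoryIndex sketches the JSON shape assumed by NewCategoryIndex: a
// "filters" object mapping a category ID to its download URL.
func newCategoryIndex(id, downloadURL string) (b []byte) {
	b, err := json.Marshal(map[string]any{
		"filters": map[string]any{
			id: map[string]any{
				"downloadUrl": downloadURL,
			},
		},
	})
	if err != nil {
		panic(err)
	}

	return b
}

func main() {
	fmt.Println(string(newCategoryIndex("cat_id", "https://filters.example/cat_id.txt")))
}
```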
// CacheTTL is the common long cache-TTL for filtering tests.
const CacheTTL = 1 * time.Hour

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"github.com/AdguardTeam/AdGuardDNS/internal/filter"
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
"github.com/prometheus/client_golang/prometheus"
@@ -14,51 +15,35 @@ import (
type DomainFilter struct {
// cacheSize is a gauge with the total count of records in the DomainStorage
// cache.
cacheSize prometheus.Gauge
cacheSize *prometheus.GaugeVec
// hits is a counter of the total number of lookups to the DomainStorage
// cache that succeeded.
hits prometheus.Counter
// misses is a counter of the total number of lookups to the DomainStorage
// cache that resulted in a miss.
misses prometheus.Counter
// lookups is a counter of the total number of lookups to the DomainStorage
// cache.
lookups *prometheus.CounterVec
}
// NewDomainFilter registers the filtering metrics in reg and returns a
// properly initialized *DomainFilter. filterName must be a valid label
// name.
func NewDomainFilter(
namespace string,
filterName string,
reg prometheus.Registerer,
) (m *DomainFilter, err error) {
// NewDomainFilter registers the filtering metrics in reg and returns a properly
// initialized *DomainFilter.
func NewDomainFilter(namespace string, reg prometheus.Registerer) (m *DomainFilter, err error) {
const (
cacheLookups = "domain_filter_cache_lookups"
cacheSize = "domain_filter_cache_size"
)
labels := prometheus.Labels{"filter": filterName}
lookups := prometheus.NewCounterVec(prometheus.CounterOpts{
Name: cacheLookups,
Subsystem: subsystemFilter,
Namespace: namespace,
Help: "Total number of lookups to DomainFilter host cache lookups. " +
"Label hit is the lookup result, either 1 for hit or 0 for miss.",
ConstLabels: labels,
}, []string{"hit"})
m = &DomainFilter{
cacheSize: prometheus.NewGauge(prometheus.GaugeOpts{
Name: cacheSize,
Subsystem: subsystemFilter,
Namespace: namespace,
Help: "The total number of items in the DomainFilter cache.",
ConstLabels: labels,
}),
hits: lookups.WithLabelValues("1"),
misses: lookups.WithLabelValues("0"),
cacheSize: prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: cacheSize,
Subsystem: subsystemFilter,
Namespace: namespace,
Help: "The total number of items in the DomainFilter cache.",
}, []string{"category"}),
lookups: prometheus.NewCounterVec(prometheus.CounterOpts{
Name: cacheLookups,
Subsystem: subsystemFilter,
Namespace: namespace,
Help: "Total number of DomainFilter host cache lookups. " +
"Label hit is the lookup result, either 1 for hit or 0 for miss.",
}, []string{"category", "hit"}),
}
var errs []error
@@ -67,7 +52,7 @@ func NewDomainFilter(
Value: m.cacheSize,
}, {
Key: cacheLookups,
Value: lookups,
Value: m.lookups,
}}
for _, c := range collectors {
@@ -84,14 +69,18 @@ func NewDomainFilter(
return m, nil
}
// IncrementLookups implements the [domain.Metrics] interface for
// *DomainFilter.
func (m *DomainFilter) IncrementLookups(_ context.Context, hit bool) {
IncrementCond(hit, m.hits, m.misses)
// IncrementLookups implements the [domain.Metrics] interface for *DomainFilter.
func (m *DomainFilter) IncrementLookups(_ context.Context, categoryID filter.CategoryID, hit bool) {
catIDStr := string(categoryID)
if hit {
m.lookups.WithLabelValues(catIDStr, "1").Inc()
} else {
m.lookups.WithLabelValues(catIDStr, "0").Inc()
}
}
// UpdateCacheSize implements the [domain.Metrics] interface for
// *DomainFilter.
func (m *DomainFilter) UpdateCacheSize(_ context.Context, size int) {
m.cacheSize.Set(float64(size))
// UpdateCacheSize implements the [domain.Metrics] interface for *DomainFilter.
func (m *DomainFilter) UpdateCacheSize(_ context.Context, categoryID filter.CategoryID, size int) {
m.cacheSize.WithLabelValues(string(categoryID)).Set(float64(size))
}
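The change above replaces per-filter constant labels with a dynamic category label on shared vectors. The resulting bookkeeping can be sketched with a stdlib map standing in for the Prometheus vectors; all names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// lookupKey mimics the label set of the lookups counter vector: one counter
// per (category, hit) pair instead of per-filter hit/miss counters.
type lookupKey struct {
	category string
	hit      string
}

// domainFilterMetrics is a stdlib-only stand-in for the Prometheus-backed
// metrics: label vectors become maps keyed by label values.
type domainFilterMetrics struct {
	mu        sync.Mutex
	lookups   map[lookupKey]int
	cacheSize map[string]int
}

func (m *domainFilterMetrics) incrementLookups(category string, hit bool) {
	m.mu.Lock()
	defer m.mu.Unlock()

	hitStr := "0"
	if hit {
		hitStr = "1"
	}

	m.lookups[lookupKey{category: category, hit: hitStr}]++
}

func main() {
	m := &domainFilterMetrics{
		lookups:   map[lookupKey]int{},
		cacheSize: map[string]int{},
	}

	m.incrementLookups("adult", true)
	m.incrementLookups("adult", true)
	m.incrementLookups("adult", false)

	fmt.Println(m.lookups[lookupKey{category: "adult", hit: "1"}])
	fmt.Println(m.lookups[lookupKey{category: "adult", hit: "0"}])
}
```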

View File

@@ -7,6 +7,7 @@ import (
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
"github.com/c2h5oh/datasize"
"github.com/prometheus/client_golang/prometheus"
)
@@ -33,6 +34,18 @@ type ProfileDB struct {
// during the last sync.
devicesNewCount prometheus.Gauge
// fileCacheSize is a gauge with the size of the last successfully
// synchronized cache file.
fileCacheSize prometheus.Gauge
// fileCacheSyncTime is a gauge with the timestamp of the last successful
// cache file synchronization.
fileCacheSyncTime prometheus.Gauge
// fileCacheStoreDuration is a histogram with the duration of storing the
// file cache to disk.
fileCacheStoreDuration prometheus.Histogram
// profilesCount is a gauge with the total number of user profiles loaded
// from the backend.
profilesCount prometheus.Gauge
@@ -77,6 +90,9 @@ func NewProfileDB(namespace string, reg prometheus.Registerer) (m *ProfileDB, er
const (
devicesCount = "devices_total"
devicesNewCount = "devices_newly_synced_total"
fileCacheSize = "file_cache_size_bytes"
fileCacheStoreDuration = "file_cache_store_duration_seconds"
fileCacheSyncTime = "file_cache_sync_timestamp"
profilesCount = "profiles_total"
profilesNewCount = "profiles_newly_synced_total"
profilesDeletedTotal = "profiles_deleted_total"
@@ -110,6 +126,25 @@ func NewProfileDB(namespace string, reg prometheus.Registerer) (m *ProfileDB, er
Help: "The number of user devices that were changed or added since " +
"the previous sync.",
}),
fileCacheSize: prometheus.NewGauge(prometheus.GaugeOpts{
Name: fileCacheSize,
Subsystem: subsystemBackend,
Namespace: namespace,
Help: "The size of the last successfully synchronized cache file, in bytes.",
}),
fileCacheStoreDuration: prometheus.NewHistogram(prometheus.HistogramOpts{
Name: fileCacheStoreDuration,
Subsystem: subsystemBackend,
Namespace: namespace,
Help: "The time elapsed on storing the file cache to disk, in seconds.",
Buckets: []float64{0.001, 0.01, 0.1, 0.5, 1, 2, 5},
}),
fileCacheSyncTime: prometheus.NewGauge(prometheus.GaugeOpts{
Name: fileCacheSyncTime,
Subsystem: subsystemBackend,
Namespace: namespace,
Help: "The timestamp of the last successful file cache synchronization.",
}),
profilesCount: prometheus.NewGauge(prometheus.GaugeOpts{
Name: profilesCount,
Subsystem: subsystemBackend,
@@ -165,6 +200,15 @@ func NewProfileDB(namespace string, reg prometheus.Registerer) (m *ProfileDB, er
}
collectors := container.KeyValues[string, prometheus.Collector]{{
Key: fileCacheSize,
Value: m.fileCacheSize,
}, {
Key: fileCacheStoreDuration,
Value: m.fileCacheStoreDuration,
}, {
Key: fileCacheSyncTime,
Value: m.fileCacheSyncTime,
}, {
Key: devicesCount,
Value: m.devicesCount,
}, {
@@ -253,6 +297,22 @@ func (m *ProfileDB) IncrementDeleted(_ context.Context) {
m.profilesDeletedTotal.Inc()
}
// SetLastFileCacheSyncTime implements the [profiledb.Metrics] interface for
// *ProfileDB.
func (m *ProfileDB) SetLastFileCacheSyncTime(_ context.Context, t time.Time) {
m.fileCacheSyncTime.Set(float64(t.Unix()))
}
// SetFileCacheSize implements the [profiledb.Metrics] interface for *ProfileDB.
func (m *ProfileDB) SetFileCacheSize(_ context.Context, size datasize.ByteSize) {
m.fileCacheSize.Set(float64(size))
}
// ObserveFileCacheStoreDuration implements the [profiledb.Metrics] interface
// for *ProfileDB.
func (m *ProfileDB) ObserveFileCacheStoreDuration(_ context.Context, d time.Duration) {
m.fileCacheStoreDuration.Observe(d.Seconds())
}
// BackendProfileDB is the Prometheus-based implementation of the
// [backendpb.ProfileDBMetrics] interface.
type BackendProfileDB struct {

View File

@@ -49,6 +49,10 @@ type Config struct {
// the string "none", filesystem cache is disabled. It must not be empty.
CacheFilePath string
// CacheFileIvl is the interval between updates of the profile cache file.
// It must be positive if filesystem cache is enabled, see CacheFilePath.
CacheFileIvl time.Duration
// FullSyncIvl is the interval between two full synchronizations with the
// storage. It must be positive.
FullSyncIvl time.Duration

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"log/slog"
"maps"
"net/netip"
"path/filepath"
"slices"
@@ -85,6 +86,10 @@ type Default struct {
// requests to the storage, unless it's a full synchronization.
syncTime time.Time
// lastCacheSync is the time of the last successful cache file
// synchronization.
lastCacheSync time.Time
// lastFullSync is the time of the last successful full synchronization.
lastFullSync time.Time
@@ -93,6 +98,9 @@ type Default struct {
// field is time.Time{}.
lastFullSyncError time.Time
// cacheIvl is the interval between two cache file synchronizations.
cacheIvl time.Duration
// fullSyncIvl is the interval between two full synchronizations with the
// storage.
fullSyncIvl time.Duration
@@ -150,6 +158,7 @@ func New(c *Config) (db *Default, err error) {
dedicatedIPToDeviceID: make(map[netip.Addr]agd.DeviceID),
humanIDToDeviceID: make(map[humanIDKey]agd.DeviceID),
linkedIPToDeviceID: make(map[netip.Addr]agd.DeviceID),
cacheIvl: c.CacheFileIvl,
fullSyncIvl: c.FullSyncIvl,
fullSyncRetryIvl: c.FullSyncRetryIvl,
}
@@ -197,7 +206,7 @@ func (db *Default) Refresh(ctx context.Context) (err error) {
db.metrics.HandleProfilesUpdate(ctx, &UpdateMetrics{
ProfilesNum: profNum,
DevicesNum: devNum,
Duration: time.Since(startTime),
Duration: db.clock.Now().Sub(startTime),
IsSuccess: err == nil,
IsFullSync: false,
})
@@ -216,6 +225,10 @@ func (db *Default) Refresh(ctx context.Context) (err error) {
db.setProfiles(ctx, profiles, devices, resp.DeviceChanges)
if db.clock.Now().Sub(db.lastCacheSync) >= db.cacheIvl {
return db.storeCache(ctx)
}
return nil
}
@@ -273,10 +286,7 @@ func (db *Default) applyChanges(p *agd.Profile, devChg *StorageDeviceChange) {
return
}
// TODO(a.garipov): Consider adding container.MapSet.Union.
for prevID := range prev.DeviceIDs.Range {
p.DeviceIDs.Add(prevID)
}
p.DeviceIDs = p.DeviceIDs.Union(p.DeviceIDs, prev.DeviceIDs)
for _, delDevID := range devChg.DeletedDeviceIDs {
p.DeviceIDs.Delete(delDevID)
@@ -374,7 +384,7 @@ func (db *Default) refreshFull(ctx context.Context) (err error) {
db.metrics.HandleProfilesUpdate(ctx, &UpdateMetrics{
ProfilesNum: profNum,
DevicesNum: devNum,
Duration: time.Since(startTime),
Duration: db.clock.Now().Sub(startTime),
IsSuccess: err == nil,
IsFullSync: true,
})
@@ -398,16 +408,31 @@ func (db *Default) refreshFull(ctx context.Context) (err error) {
db.lastFullSync = db.clock.Now()
db.lastFullSyncError = time.Time{}
err = db.cache.Store(ctx, &internal.FileCache{
return db.storeCache(ctx)
}
// storeCache stores the profiles and devices in db to the cache file.
func (db *Default) storeCache(ctx context.Context) (err error) {
db.mapsMu.RLock()
defer db.mapsMu.RUnlock()
start := db.clock.Now()
n, err := db.cache.Store(ctx, &internal.FileCache{
SyncTime: db.syncTime,
Profiles: profiles,
Devices: devices,
Profiles: slices.Collect(maps.Values(db.profiles)),
Devices: slices.Collect(maps.Values(db.devices)),
Version: internal.FileCacheVersion,
})
if err != nil {
return fmt.Errorf("saving cache: %w", err)
}
db.lastCacheSync = db.clock.Now()
db.metrics.SetLastFileCacheSyncTime(ctx, db.lastCacheSync)
db.metrics.SetFileCacheSize(ctx, n)
db.metrics.ObserveFileCacheStoreDuration(ctx, db.clock.Now().Sub(start))
return nil
}
@@ -461,7 +486,9 @@ func (db *Default) fetchProfiles(
// be locked.
func (db *Default) needsFullSync(ctx context.Context) (isFull bool) {
lastFull := db.lastFullSync
sinceFull := time.Since(lastFull)
now := db.clock.Now()
sinceFull := now.Sub(lastFull)
if db.lastFullSyncError.IsZero() {
return sinceFull >= db.fullSyncIvl
@@ -474,7 +501,7 @@ func (db *Default) needsFullSync(ctx context.Context) (isFull bool) {
"last_successful_sync_time", lastFull,
)
sinceLastError := time.Since(db.lastFullSyncError)
sinceLastError := now.Sub(db.lastFullSyncError)
return sinceLastError >= db.fullSyncRetryIvl
}
@@ -483,10 +510,10 @@ func (db *Default) needsFullSync(ctx context.Context) (isFull bool) {
// attempt. db.refreshMu must be locked.
func (db *Default) sinceLastFull() (sinceFull time.Duration) {
if !db.lastFullSyncError.IsZero() {
return time.Since(db.lastFullSyncError)
return db.clock.Now().Sub(db.lastFullSyncError)
}
return time.Since(db.lastFullSync)
return db.clock.Now().Sub(db.lastFullSync)
}
// loadFileCache loads the profiles data from the filesystem cache.
@@ -519,7 +546,7 @@ func (db *Default) loadFileCache(ctx context.Context) (err error) {
"version", c.Version,
"prof_num", profNum,
"dev_num", devNum,
"elapsed", time.Since(start),
"elapsed", db.clock.Now().Sub(start),
)
if profNum == 0 || devNum == 0 {
@@ -529,7 +556,7 @@ func (db *Default) loadFileCache(ctx context.Context) (err error) {
}
db.setProfilesFull(ctx, c.Profiles, c.Devices)
db.syncTime, db.lastFullSync = c.SyncTime, c.SyncTime
db.syncTime, db.lastFullSync, db.lastCacheSync = c.SyncTime, c.SyncTime, c.SyncTime
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"net/netip"
"path/filepath"
"slices"
"testing"
"time"
@@ -16,6 +17,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/profiledbtest"
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/testutil/faketime"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -355,13 +357,14 @@ func TestDefault_fileCache_success(t *testing.T) {
})
ctx := profiledbtest.ContextWithTimeout(t)
err := pbCache.Store(ctx, &internal.FileCache{
n, err := pbCache.Store(ctx, &internal.FileCache{
SyncTime: wantSyncTime,
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev},
Version: internal.FileCacheVersion,
})
require.NoError(t, err)
assert.Positive(t, n)
db := newProfileDB(t, &profiledb.Config{
Storage: ps,
@@ -404,10 +407,11 @@ func TestDefault_fileCache_badVersion(t *testing.T) {
})
ctx := profiledbtest.ContextWithTimeout(t)
err := pbCache.Store(ctx, &internal.FileCache{
n, err := pbCache.Store(ctx, &internal.FileCache{
Version: 10000,
})
require.NoError(t, err)
assert.Positive(t, n)
db := newProfileDB(t, &profiledb.Config{
Storage: ps,
@@ -420,6 +424,150 @@ func TestDefault_fileCache_badVersion(t *testing.T) {
assert.True(t, storageCalled)
}
func TestDefault_fileCache_refresh(t *testing.T) {
t.Parallel()
prof, dev0 := profiledbtest.NewProfile(t)
dev1 := profiledbtest.NewDevice(t, "dev1", "dev1")
dev2 := profiledbtest.NewDevice(t, "dev2", "dev2")
dev3 := profiledbtest.NewDevice(t, "dev3", "dev3")
// Use the time with the monotonic clock reading stripped.
dbSyncTime := time.Now().Round(0).UTC()
ps := agdtest.NewProfileStorage()
ps.OnProfiles = func(
_ context.Context,
req *profiledb.StorageProfilesRequest,
) (resp *profiledb.StorageProfilesResponse, err error) {
return &profiledb.StorageProfilesResponse{
SyncTime: dbSyncTime,
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev1},
}, nil
}
cacheFilePath := filepath.Join(t.TempDir(), "profiles.pb")
pbCache := filecachepb.New(&filecachepb.Config{
Logger: profiledbtest.Logger,
BaseCustomLogger: profiledbtest.Logger,
ProfileAccessConstructor: profiledbtest.ProfileAccessConstructor,
CacheFilePath: cacheFilePath,
ResponseSizeEstimate: profiledbtest.RespSzEst,
})
ctx := profiledbtest.ContextWithTimeout(t)
n, err := pbCache.Store(ctx, &internal.FileCache{
SyncTime: dbSyncTime,
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev0},
Version: internal.FileCacheVersion,
})
require.NoError(t, err)
assert.Positive(t, n)
cacheFileIvl := time.Minute
testTimeNow := dbSyncTime
db := newProfileDB(t, &profiledb.Config{
Clock: &faketime.Clock{OnNow: func() (now time.Time) {
return testTimeNow
}},
Storage: ps,
CacheFilePath: cacheFilePath,
CacheFileIvl: cacheFileIvl,
FullSyncIvl: cacheFileIvl * 2,
})
// Check file cache content after init.
assertCacheData(t, pbCache, dbSyncTime, prof, dev0)
// Set time to next full sync interval.
testTimeNow = testTimeNow.Add(cacheFileIvl * 2)
// Full sync refresh with cache file update.
require.NoError(t, db.Refresh(ctx))
assertCacheData(t, pbCache, dbSyncTime, prof, dev1)
// New device dev2 is added.
newSyncTime := dbSyncTime.Add(time.Second)
ps.OnProfiles = func(
_ context.Context,
req *profiledb.StorageProfilesRequest,
) (resp *profiledb.StorageProfilesResponse, err error) {
return &profiledb.StorageProfilesResponse{
SyncTime: newSyncTime,
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev1, dev2},
}, nil
}
// Partial sync refresh without file cache update.
require.NoError(t, db.Refresh(ctx))
assertCacheData(t, pbCache, dbSyncTime, prof, dev1)
// New device dev3 is added.
dbSyncTime = dbSyncTime.Add(time.Second)
ps.OnProfiles = func(
_ context.Context,
req *profiledb.StorageProfilesRequest,
) (resp *profiledb.StorageProfilesResponse, err error) {
return &profiledb.StorageProfilesResponse{
SyncTime: dbSyncTime,
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev1, dev2, dev3},
}, nil
}
// Set time to next cache file interval.
testTimeNow = testTimeNow.Add(cacheFileIvl)
// Partial sync refresh with file cache update.
require.NoError(t, db.Refresh(ctx))
assertCacheData(t, pbCache, dbSyncTime, prof, dev1, dev2, dev3)
}
// assertCacheData checks that the file cache contains the expected profile and
// devices.
func assertCacheData(
tb testing.TB,
pbCache internal.FileCacheStorage,
dbSyncTime time.Time,
prof *agd.Profile,
devs ...*agd.Device,
) {
tb.Helper()
ctx := profiledbtest.ContextWithTimeout(tb)
gotCache, err := pbCache.Load(ctx)
require.NoError(tb, err)
assert.Equal(tb, dbSyncTime, gotCache.SyncTime)
require.Len(tb, gotCache.Profiles, 1)
assert.Equal(tb, prof.ID, gotCache.Profiles[0].ID)
require.Len(tb, gotCache.Devices, len(devs))
var gotDevIDs []agd.DeviceID
for _, d := range gotCache.Devices {
gotDevIDs = append(gotDevIDs, d.ID)
}
var wantDevIDs []agd.DeviceID
for _, d := range devs {
wantDevIDs = append(wantDevIDs, d.ID)
}
slices.Sort(gotDevIDs)
slices.Sort(wantDevIDs)
assert.Equal(tb, wantDevIDs, gotDevIDs)
}
func TestDefault_CreateAutoDevice(t *testing.T) {
t.Parallel()

View File

@@ -1,8 +1,13 @@
// NOTE: Keep in sync with the filecachepb/filecache.proto
//
// TODO(f.setrakov): Remove filecachepb/filecache.proto when the migration to
// the opaque API is finished.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc v6.33.0
// source: filecache.proto
// protoc v6.33.1
// source: fc.proto
package fcpb
@@ -34,7 +39,7 @@ type FileCache struct {
func (x *FileCache) Reset() {
*x = FileCache{}
mi := &file_filecache_proto_msgTypes[0]
mi := &file_fc_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -46,7 +51,7 @@ func (x *FileCache) String() string {
func (*FileCache) ProtoMessage() {}
func (x *FileCache) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[0]
mi := &file_fc_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -163,7 +168,7 @@ type Profile struct {
func (x *Profile) Reset() {
*x = Profile{}
mi := &file_filecache_proto_msgTypes[1]
mi := &file_fc_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -175,7 +180,7 @@ func (x *Profile) String() string {
func (*Profile) ProtoMessage() {}
func (x *Profile) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[1]
mi := &file_fc_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -995,7 +1000,7 @@ func (b0 Profile_builder) Build() *Profile {
type case_Profile_BlockingMode protoreflect.FieldNumber
func (x case_Profile_BlockingMode) String() string {
md := file_filecache_proto_msgTypes[1].Descriptor()
md := file_fc_proto_msgTypes[1].Descriptor()
if x == 0 {
return "not set"
}
@@ -1005,7 +1010,7 @@ func (x case_Profile_BlockingMode) String() string {
type case_Profile_AdultBlockingMode protoreflect.FieldNumber
func (x case_Profile_AdultBlockingMode) String() string {
md := file_filecache_proto_msgTypes[1].Descriptor()
md := file_fc_proto_msgTypes[1].Descriptor()
if x == 0 {
return "not set"
}
@@ -1015,7 +1020,7 @@ func (x case_Profile_AdultBlockingMode) String() string {
type case_Profile_SafeBrowsingBlockingMode protoreflect.FieldNumber
func (x case_Profile_SafeBrowsingBlockingMode) String() string {
md := file_filecache_proto_msgTypes[1].Descriptor()
md := file_fc_proto_msgTypes[1].Descriptor()
if x == 0 {
return "not set"
}
@@ -1116,7 +1121,7 @@ type AccountCustomDomains struct {
func (x *AccountCustomDomains) Reset() {
*x = AccountCustomDomains{}
mi := &file_filecache_proto_msgTypes[2]
mi := &file_fc_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1128,7 +1133,7 @@ func (x *AccountCustomDomains) String() string {
func (*AccountCustomDomains) ProtoMessage() {}
func (x *AccountCustomDomains) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[2]
mi := &file_fc_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1189,7 +1194,7 @@ type CustomDomainConfig struct {
func (x *CustomDomainConfig) Reset() {
*x = CustomDomainConfig{}
mi := &file_filecache_proto_msgTypes[3]
mi := &file_fc_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1201,7 +1206,7 @@ func (x *CustomDomainConfig) String() string {
func (*CustomDomainConfig) ProtoMessage() {}
func (x *CustomDomainConfig) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[3]
mi := &file_fc_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1341,7 +1346,7 @@ func (b0 CustomDomainConfig_builder) Build() *CustomDomainConfig {
type case_CustomDomainConfig_State protoreflect.FieldNumber
func (x case_CustomDomainConfig_State) String() string {
md := file_filecache_proto_msgTypes[3].Descriptor()
md := file_fc_proto_msgTypes[3].Descriptor()
if x == 0 {
return "not set"
}
@@ -1365,18 +1370,19 @@ func (*customDomainConfig_StateCurrent_) isCustomDomainConfig_State() {}
func (*customDomainConfig_StatePending_) isCustomDomainConfig_State() {}
type FilterConfig struct {
state protoimpl.MessageState `protogen:"opaque.v1"`
xxx_hidden_Custom *FilterConfig_Custom `protobuf:"bytes,1,opt,name=custom,proto3"`
xxx_hidden_Parental *FilterConfig_Parental `protobuf:"bytes,2,opt,name=parental,proto3"`
xxx_hidden_RuleList *FilterConfig_RuleList `protobuf:"bytes,3,opt,name=rule_list,json=ruleList,proto3"`
xxx_hidden_SafeBrowsing *FilterConfig_SafeBrowsing `protobuf:"bytes,4,opt,name=safe_browsing,json=safeBrowsing,proto3"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"opaque.v1"`
xxx_hidden_Custom *FilterConfig_Custom `protobuf:"bytes,1,opt,name=custom,proto3"`
xxx_hidden_Parental *FilterConfig_Parental `protobuf:"bytes,2,opt,name=parental,proto3"`
xxx_hidden_RuleList *FilterConfig_RuleList `protobuf:"bytes,3,opt,name=rule_list,json=ruleList,proto3"`
xxx_hidden_SafeBrowsing *FilterConfig_SafeBrowsing `protobuf:"bytes,4,opt,name=safe_browsing,json=safeBrowsing,proto3"`
xxx_hidden_CategoryFilter *FilterConfig_CategoryFilter `protobuf:"bytes,5,opt,name=category_filter,json=categoryFilter,proto3"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *FilterConfig) Reset() {
*x = FilterConfig{}
mi := &file_filecache_proto_msgTypes[4]
mi := &file_fc_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1388,7 +1394,7 @@ func (x *FilterConfig) String() string {
func (*FilterConfig) ProtoMessage() {}
func (x *FilterConfig) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[4]
mi := &file_fc_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1427,6 +1433,13 @@ func (x *FilterConfig) GetSafeBrowsing() *FilterConfig_SafeBrowsing {
return nil
}
func (x *FilterConfig) GetCategoryFilter() *FilterConfig_CategoryFilter {
if x != nil {
return x.xxx_hidden_CategoryFilter
}
return nil
}
func (x *FilterConfig) SetCustom(v *FilterConfig_Custom) {
x.xxx_hidden_Custom = v
}
@@ -1443,6 +1456,10 @@ func (x *FilterConfig) SetSafeBrowsing(v *FilterConfig_SafeBrowsing) {
x.xxx_hidden_SafeBrowsing = v
}
func (x *FilterConfig) SetCategoryFilter(v *FilterConfig_CategoryFilter) {
x.xxx_hidden_CategoryFilter = v
}
func (x *FilterConfig) HasCustom() bool {
if x == nil {
return false
@@ -1471,6 +1488,13 @@ func (x *FilterConfig) HasSafeBrowsing() bool {
return x.xxx_hidden_SafeBrowsing != nil
}
func (x *FilterConfig) HasCategoryFilter() bool {
if x == nil {
return false
}
return x.xxx_hidden_CategoryFilter != nil
}
func (x *FilterConfig) ClearCustom() {
x.xxx_hidden_Custom = nil
}
@@ -1487,13 +1511,18 @@ func (x *FilterConfig) ClearSafeBrowsing() {
x.xxx_hidden_SafeBrowsing = nil
}
func (x *FilterConfig) ClearCategoryFilter() {
x.xxx_hidden_CategoryFilter = nil
}
type FilterConfig_builder struct {
_ [0]func() // Prevents comparability and use of unkeyed literals for the builder.
Custom *FilterConfig_Custom
Parental *FilterConfig_Parental
RuleList *FilterConfig_RuleList
SafeBrowsing *FilterConfig_SafeBrowsing
Custom *FilterConfig_Custom
Parental *FilterConfig_Parental
RuleList *FilterConfig_RuleList
SafeBrowsing *FilterConfig_SafeBrowsing
CategoryFilter *FilterConfig_CategoryFilter
}
func (b0 FilterConfig_builder) Build() *FilterConfig {
@@ -1504,6 +1533,7 @@ func (b0 FilterConfig_builder) Build() *FilterConfig {
x.xxx_hidden_Parental = b.Parental
x.xxx_hidden_RuleList = b.RuleList
x.xxx_hidden_SafeBrowsing = b.SafeBrowsing
x.xxx_hidden_CategoryFilter = b.CategoryFilter
return m0
}
@@ -1517,7 +1547,7 @@ type DayInterval struct {
func (x *DayInterval) Reset() {
*x = DayInterval{}
mi := &file_filecache_proto_msgTypes[5]
mi := &file_fc_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1529,7 +1559,7 @@ func (x *DayInterval) String() string {
func (*DayInterval) ProtoMessage() {}
func (x *DayInterval) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[5]
mi := &file_fc_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1588,7 +1618,7 @@ type BlockingModeCustomIP struct {
func (x *BlockingModeCustomIP) Reset() {
*x = BlockingModeCustomIP{}
mi := &file_filecache_proto_msgTypes[6]
mi := &file_fc_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1600,7 +1630,7 @@ func (x *BlockingModeCustomIP) String() string {
func (*BlockingModeCustomIP) ProtoMessage() {}
func (x *BlockingModeCustomIP) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[6]
mi := &file_fc_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1657,7 +1687,7 @@ type BlockingModeNXDOMAIN struct {
func (x *BlockingModeNXDOMAIN) Reset() {
*x = BlockingModeNXDOMAIN{}
mi := &file_filecache_proto_msgTypes[7]
mi := &file_fc_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1669,7 +1699,7 @@ func (x *BlockingModeNXDOMAIN) String() string {
func (*BlockingModeNXDOMAIN) ProtoMessage() {}
func (x *BlockingModeNXDOMAIN) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[7]
mi := &file_fc_proto_msgTypes[7]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1700,7 +1730,7 @@ type BlockingModeNullIP struct {
func (x *BlockingModeNullIP) Reset() {
*x = BlockingModeNullIP{}
mi := &file_filecache_proto_msgTypes[8]
mi := &file_fc_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1712,7 +1742,7 @@ func (x *BlockingModeNullIP) String() string {
func (*BlockingModeNullIP) ProtoMessage() {}
func (x *BlockingModeNullIP) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[8]
mi := &file_fc_proto_msgTypes[8]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1743,7 +1773,7 @@ type BlockingModeREFUSED struct {
func (x *BlockingModeREFUSED) Reset() {
*x = BlockingModeREFUSED{}
mi := &file_filecache_proto_msgTypes[9]
mi := &file_fc_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1755,7 +1785,7 @@ func (x *BlockingModeREFUSED) String() string {
func (*BlockingModeREFUSED) ProtoMessage() {}
func (x *BlockingModeREFUSED) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[9]
mi := &file_fc_proto_msgTypes[9]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1793,7 +1823,7 @@ type Device struct {
func (x *Device) Reset() {
*x = Device{}
mi := &file_filecache_proto_msgTypes[10]
mi := &file_fc_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1805,7 +1835,7 @@ func (x *Device) String() string {
func (*Device) ProtoMessage() {}
func (x *Device) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[10]
mi := &file_fc_proto_msgTypes[10]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1947,7 +1977,7 @@ type Access struct {
func (x *Access) Reset() {
*x = Access{}
mi := &file_filecache_proto_msgTypes[11]
mi := &file_fc_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1959,7 +1989,7 @@ func (x *Access) String() string {
func (*Access) ProtoMessage() {}
func (x *Access) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[11]
mi := &file_fc_proto_msgTypes[11]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2074,7 +2104,7 @@ type CidrRange struct {
func (x *CidrRange) Reset() {
*x = CidrRange{}
mi := &file_filecache_proto_msgTypes[12]
mi := &file_fc_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2086,7 +2116,7 @@ func (x *CidrRange) String() string {
func (*CidrRange) ProtoMessage() {}
func (x *CidrRange) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[12]
mi := &file_fc_proto_msgTypes[12]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2148,7 +2178,7 @@ type AuthenticationSettings struct {
func (x *AuthenticationSettings) Reset() {
*x = AuthenticationSettings{}
mi := &file_filecache_proto_msgTypes[13]
mi := &file_fc_proto_msgTypes[13]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2160,7 +2190,7 @@ func (x *AuthenticationSettings) String() string {
func (*AuthenticationSettings) ProtoMessage() {}
func (x *AuthenticationSettings) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[13]
mi := &file_fc_proto_msgTypes[13]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2261,7 +2291,7 @@ func (b0 AuthenticationSettings_builder) Build() *AuthenticationSettings {
type case_AuthenticationSettings_DohPasswordHash protoreflect.FieldNumber
func (x case_AuthenticationSettings_DohPasswordHash) String() string {
md := file_filecache_proto_msgTypes[13].Descriptor()
md := file_fc_proto_msgTypes[13].Descriptor()
if x == 0 {
return "not set"
}
@@ -2289,7 +2319,7 @@ type Ratelimiter struct {
func (x *Ratelimiter) Reset() {
*x = Ratelimiter{}
mi := &file_filecache_proto_msgTypes[14]
mi := &file_fc_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2301,7 +2331,7 @@ func (x *Ratelimiter) String() string {
func (*Ratelimiter) ProtoMessage() {}
func (x *Ratelimiter) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[14]
mi := &file_fc_proto_msgTypes[14]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2377,7 +2407,7 @@ type CustomDomainConfig_StateCurrent struct {
func (x *CustomDomainConfig_StateCurrent) Reset() {
*x = CustomDomainConfig_StateCurrent{}
mi := &file_filecache_proto_msgTypes[15]
mi := &file_fc_proto_msgTypes[15]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2389,7 +2419,7 @@ func (x *CustomDomainConfig_StateCurrent) String() string {
func (*CustomDomainConfig_StateCurrent) ProtoMessage() {}
func (x *CustomDomainConfig_StateCurrent) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[15]
mi := &file_fc_proto_msgTypes[15]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2496,7 +2526,7 @@ type CustomDomainConfig_StatePending struct {
func (x *CustomDomainConfig_StatePending) Reset() {
*x = CustomDomainConfig_StatePending{}
mi := &file_filecache_proto_msgTypes[16]
mi := &file_fc_proto_msgTypes[16]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2508,7 +2538,7 @@ func (x *CustomDomainConfig_StatePending) String() string {
func (*CustomDomainConfig_StatePending) ProtoMessage() {}
func (x *CustomDomainConfig_StatePending) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[16]
mi := &file_fc_proto_msgTypes[16]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2578,7 +2608,7 @@ type FilterConfig_Custom struct {
func (x *FilterConfig_Custom) Reset() {
*x = FilterConfig_Custom{}
mi := &file_filecache_proto_msgTypes[17]
mi := &file_fc_proto_msgTypes[17]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2590,7 +2620,7 @@ func (x *FilterConfig_Custom) String() string {
func (*FilterConfig_Custom) ProtoMessage() {}
func (x *FilterConfig_Custom) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[17]
mi := &file_fc_proto_msgTypes[17]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2653,7 +2683,7 @@ type FilterConfig_Parental struct {
func (x *FilterConfig_Parental) Reset() {
*x = FilterConfig_Parental{}
mi := &file_filecache_proto_msgTypes[18]
mi := &file_fc_proto_msgTypes[18]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2665,7 +2695,7 @@ func (x *FilterConfig_Parental) String() string {
func (*FilterConfig_Parental) ProtoMessage() {}
func (x *FilterConfig_Parental) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[18]
mi := &file_fc_proto_msgTypes[18]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2787,7 +2817,7 @@ type FilterConfig_Schedule struct {
func (x *FilterConfig_Schedule) Reset() {
*x = FilterConfig_Schedule{}
mi := &file_filecache_proto_msgTypes[19]
mi := &file_fc_proto_msgTypes[19]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2799,7 +2829,7 @@ func (x *FilterConfig_Schedule) String() string {
func (*FilterConfig_Schedule) ProtoMessage() {}
func (x *FilterConfig_Schedule) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[19]
mi := &file_fc_proto_msgTypes[19]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -2874,7 +2904,7 @@ type FilterConfig_WeeklySchedule struct {
func (x *FilterConfig_WeeklySchedule) Reset() {
*x = FilterConfig_WeeklySchedule{}
mi := &file_filecache_proto_msgTypes[20]
mi := &file_fc_proto_msgTypes[20]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -2886,7 +2916,7 @@ func (x *FilterConfig_WeeklySchedule) String() string {
func (*FilterConfig_WeeklySchedule) ProtoMessage() {}
func (x *FilterConfig_WeeklySchedule) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[20]
mi := &file_fc_proto_msgTypes[20]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3087,7 +3117,7 @@ type FilterConfig_RuleList struct {
func (x *FilterConfig_RuleList) Reset() {
*x = FilterConfig_RuleList{}
mi := &file_filecache_proto_msgTypes[21]
mi := &file_fc_proto_msgTypes[21]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3099,7 +3129,7 @@ func (x *FilterConfig_RuleList) String() string {
func (*FilterConfig_RuleList) ProtoMessage() {}
func (x *FilterConfig_RuleList) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[21]
mi := &file_fc_proto_msgTypes[21]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3159,7 +3189,7 @@ type FilterConfig_SafeBrowsing struct {
func (x *FilterConfig_SafeBrowsing) Reset() {
*x = FilterConfig_SafeBrowsing{}
mi := &file_filecache_proto_msgTypes[22]
mi := &file_fc_proto_msgTypes[22]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -3171,7 +3201,7 @@ func (x *FilterConfig_SafeBrowsing) String() string {
func (*FilterConfig_SafeBrowsing) ProtoMessage() {}
func (x *FilterConfig_SafeBrowsing) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[22]
mi := &file_fc_proto_msgTypes[22]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -3233,25 +3263,96 @@ func (b0 FilterConfig_SafeBrowsing_builder) Build() *FilterConfig_SafeBrowsing {
return m0
}
var File_filecache_proto protoreflect.FileDescriptor
type FilterConfig_CategoryFilter struct {
state protoimpl.MessageState `protogen:"opaque.v1"`
xxx_hidden_Ids []string `protobuf:"bytes,1,rep,name=ids,proto3"`
xxx_hidden_Enabled bool `protobuf:"varint,2,opt,name=enabled,proto3"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
const file_filecache_proto_rawDesc = "" +
func (x *FilterConfig_CategoryFilter) Reset() {
*x = FilterConfig_CategoryFilter{}
mi := &file_fc_proto_msgTypes[23]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *FilterConfig_CategoryFilter) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FilterConfig_CategoryFilter) ProtoMessage() {}
func (x *FilterConfig_CategoryFilter) ProtoReflect() protoreflect.Message {
mi := &file_fc_proto_msgTypes[23]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
func (x *FilterConfig_CategoryFilter) GetIds() []string {
if x != nil {
return x.xxx_hidden_Ids
}
return nil
}
func (x *FilterConfig_CategoryFilter) GetEnabled() bool {
if x != nil {
return x.xxx_hidden_Enabled
}
return false
}
func (x *FilterConfig_CategoryFilter) SetIds(v []string) {
x.xxx_hidden_Ids = v
}
func (x *FilterConfig_CategoryFilter) SetEnabled(v bool) {
x.xxx_hidden_Enabled = v
}
type FilterConfig_CategoryFilter_builder struct {
_ [0]func() // Prevents comparability and use of unkeyed literals for the builder.
Ids []string
Enabled bool
}
func (b0 FilterConfig_CategoryFilter_builder) Build() *FilterConfig_CategoryFilter {
m0 := &FilterConfig_CategoryFilter{}
b, x := &b0, m0
_, _ = b, x
x.xxx_hidden_Ids = b.Ids
x.xxx_hidden_Enabled = b.Enabled
return m0
}
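The opaque-API builder added above follows the usual protoc-gen-go pattern: populate exported builder fields, then call `Build` to get the message with hidden storage. A minimal, self-contained sketch of that pattern is below; it re-declares stripped-down stand-in types locally rather than importing the generated `fcpb` package, and the category IDs are purely illustrative:

```go
package main

import "fmt"

// categoryFilter stands in for the generated opaque message type
// (FilterConfig_CategoryFilter); fields are hidden behind accessors
// in the real generated code.
type categoryFilter struct {
	ids     []string
	enabled bool
}

// categoryFilterBuilder mirrors FilterConfig_CategoryFilter_builder.
type categoryFilterBuilder struct {
	_       [0]func() // Prevents comparability and use of unkeyed literals.
	Ids     []string
	Enabled bool
}

// Build copies the builder fields into the message, as the generated
// Build methods do.
func (b categoryFilterBuilder) Build() *categoryFilter {
	return &categoryFilter{ids: b.Ids, enabled: b.Enabled}
}

func main() {
	cf := categoryFilterBuilder{
		Ids:     []string{"example_category_1", "example_category_2"},
		Enabled: true,
	}.Build()

	fmt.Println(cf.enabled, len(cf.ids))
}
```

With the real generated package, the equivalent call would be `fcpb.FilterConfig_CategoryFilter_builder{...}.Build()`, and the result would be read back through `GetIds` and `GetEnabled`.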
var File_fc_proto protoreflect.FileDescriptor
const file_fc_proto_rawDesc = "" +
"\n" +
"\x0ffilecache.proto\x12\tprofiledb\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xbb\x01\n" +
"\bfc.proto\x12\x04fcpb\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xb1\x01\n" +
"\tFileCache\x127\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\x12.\n" +
"\bprofiles\x18\x02 \x03(\v2\x12.profiledb.ProfileR\bprofiles\x12+\n" +
"\adevices\x18\x03 \x03(\v2\x11.profiledb.DeviceR\adevices\x12\x18\n" +
"\aversion\x18\x04 \x01(\x05R\aversion\"\xf3\x0f\n" +
"\aProfile\x12F\n" +
"\x0ecustom_domains\x18\x14 \x01(\v2\x1f.profiledb.AccountCustomDomainsR\rcustomDomains\x12<\n" +
"\rfilter_config\x18\x01 \x01(\v2\x17.profiledb.FilterConfigR\ffilterConfig\x12)\n" +
"\x06access\x18\x02 \x01(\v2\x11.profiledb.AccessR\x06access\x12X\n" +
"\x17blocking_mode_custom_ip\x18\x03 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x00R\x14blockingModeCustomIp\x12W\n" +
"\x16blocking_mode_nxdomain\x18\x04 \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x00R\x14blockingModeNxdomain\x12R\n" +
"\x15blocking_mode_null_ip\x18\x05 \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x00R\x12blockingModeNullIp\x12T\n" +
"\x15blocking_mode_refused\x18\x06 \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x00R\x13blockingModeRefused\x128\n" +
"\vratelimiter\x18\a \x01(\v2\x16.profiledb.RatelimiterR\vratelimiter\x12\x1d\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\x12)\n" +
"\bprofiles\x18\x02 \x03(\v2\r.fcpb.ProfileR\bprofiles\x12&\n" +
"\adevices\x18\x03 \x03(\v2\f.fcpb.DeviceR\adevices\x12\x18\n" +
"\aversion\x18\x04 \x01(\x05R\aversion\"\xa3\x0f\n" +
"\aProfile\x12A\n" +
"\x0ecustom_domains\x18\x14 \x01(\v2\x1a.fcpb.AccountCustomDomainsR\rcustomDomains\x127\n" +
"\rfilter_config\x18\x01 \x01(\v2\x12.fcpb.FilterConfigR\ffilterConfig\x12$\n" +
"\x06access\x18\x02 \x01(\v2\f.fcpb.AccessR\x06access\x12S\n" +
"\x17blocking_mode_custom_ip\x18\x03 \x01(\v2\x1a.fcpb.BlockingModeCustomIPH\x00R\x14blockingModeCustomIp\x12R\n" +
"\x16blocking_mode_nxdomain\x18\x04 \x01(\v2\x1a.fcpb.BlockingModeNXDOMAINH\x00R\x14blockingModeNxdomain\x12M\n" +
"\x15blocking_mode_null_ip\x18\x05 \x01(\v2\x18.fcpb.BlockingModeNullIPH\x00R\x12blockingModeNullIp\x12O\n" +
"\x15blocking_mode_refused\x18\x06 \x01(\v2\x19.fcpb.BlockingModeREFUSEDH\x00R\x13blockingModeRefused\x123\n" +
"\vratelimiter\x18\a \x01(\v2\x11.fcpb.RatelimiterR\vratelimiter\x12\x1d\n" +
"\n" +
"account_id\x18\x13 \x01(\tR\taccountId\x12\x1d\n" +
"\n" +
@@ -3267,24 +3368,24 @@ const file_filecache_proto_rawDesc = "" +
"\adeleted\x18\x0f \x01(\bR\adeleted\x12+\n" +
"\x11filtering_enabled\x18\x10 \x01(\bR\x10filteringEnabled\x12$\n" +
"\x0eip_log_enabled\x18\x11 \x01(\bR\fipLogEnabled\x12*\n" +
"\x11query_log_enabled\x18\x12 \x01(\bR\x0fqueryLogEnabled\x12c\n" +
"\x1dadult_blocking_mode_custom_ip\x18\x15 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x01R\x19adultBlockingModeCustomIp\x12b\n" +
"\x1cadult_blocking_mode_nxdomain\x18\x16 \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x01R\x19adultBlockingModeNxdomain\x12]\n" +
"\x1badult_blocking_mode_null_ip\x18\x17 \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x01R\x17adultBlockingModeNullIp\x12_\n" +
"\x1badult_blocking_mode_refused\x18\x18 \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x01R\x18adultBlockingModeRefused\x12r\n" +
"%safe_browsing_blocking_mode_custom_ip\x18\x19 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x02R safeBrowsingBlockingModeCustomIp\x12q\n" +
"$safe_browsing_blocking_mode_nxdomain\x18\x1a \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x02R safeBrowsingBlockingModeNxdomain\x12l\n" +
"#safe_browsing_blocking_mode_null_ip\x18\x1b \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x02R\x1esafeBrowsingBlockingModeNullIp\x12n\n" +
"#safe_browsing_blocking_mode_refused\x18\x1c \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x02R\x1fsafeBrowsingBlockingModeRefusedB\x0f\n" +
"\x11query_log_enabled\x18\x12 \x01(\bR\x0fqueryLogEnabled\x12^\n" +
"\x1dadult_blocking_mode_custom_ip\x18\x15 \x01(\v2\x1a.fcpb.BlockingModeCustomIPH\x01R\x19adultBlockingModeCustomIp\x12]\n" +
"\x1cadult_blocking_mode_nxdomain\x18\x16 \x01(\v2\x1a.fcpb.BlockingModeNXDOMAINH\x01R\x19adultBlockingModeNxdomain\x12X\n" +
"\x1badult_blocking_mode_null_ip\x18\x17 \x01(\v2\x18.fcpb.BlockingModeNullIPH\x01R\x17adultBlockingModeNullIp\x12Z\n" +
"\x1badult_blocking_mode_refused\x18\x18 \x01(\v2\x19.fcpb.BlockingModeREFUSEDH\x01R\x18adultBlockingModeRefused\x12m\n" +
"%safe_browsing_blocking_mode_custom_ip\x18\x19 \x01(\v2\x1a.fcpb.BlockingModeCustomIPH\x02R safeBrowsingBlockingModeCustomIp\x12l\n" +
"$safe_browsing_blocking_mode_nxdomain\x18\x1a \x01(\v2\x1a.fcpb.BlockingModeNXDOMAINH\x02R safeBrowsingBlockingModeNxdomain\x12g\n" +
"#safe_browsing_blocking_mode_null_ip\x18\x1b \x01(\v2\x18.fcpb.BlockingModeNullIPH\x02R\x1esafeBrowsingBlockingModeNullIp\x12i\n" +
"#safe_browsing_blocking_mode_refused\x18\x1c \x01(\v2\x19.fcpb.BlockingModeREFUSEDH\x02R\x1fsafeBrowsingBlockingModeRefusedB\x0f\n" +
"\rblocking_modeB\x15\n" +
"\x13adult_blocking_modeB\x1d\n" +
"\x1bsafe_browsing_blocking_mode\"i\n" +
"\x14AccountCustomDomains\x127\n" +
"\adomains\x18\x01 \x03(\v2\x1d.profiledb.CustomDomainConfigR\adomains\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"\x85\x04\n" +
"\x12CustomDomainConfig\x12Q\n" +
"\rstate_current\x18\x01 \x01(\v2*.profiledb.CustomDomainConfig.StateCurrentH\x00R\fstateCurrent\x12Q\n" +
"\rstate_pending\x18\x02 \x01(\v2*.profiledb.CustomDomainConfig.StatePendingH\x00R\fstatePending\x12\x18\n" +
"\x1bsafe_browsing_blocking_mode\"d\n" +
"\x14AccountCustomDomains\x122\n" +
"\adomains\x18\x01 \x03(\v2\x18.fcpb.CustomDomainConfigR\adomains\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"\xfb\x03\n" +
"\x12CustomDomainConfig\x12L\n" +
"\rstate_current\x18\x01 \x01(\v2%.fcpb.CustomDomainConfig.StateCurrentH\x00R\fstateCurrent\x12L\n" +
"\rstate_pending\x18\x02 \x01(\v2%.fcpb.CustomDomainConfig.StatePendingH\x00R\fstatePending\x12\x18\n" +
"\adomains\x18\x03 \x03(\tR\adomains\x1a\xb9\x01\n" +
"\fStateCurrent\x129\n" +
"\n" +
@@ -3295,41 +3396,45 @@ const file_filecache_proto_rawDesc = "" +
"\fStatePending\x122\n" +
"\x06expire\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\x06expire\x12&\n" +
"\x0fwell_known_path\x18\x02 \x01(\tR\rwellKnownPathB\a\n" +
"\x05state\"\xa9\n" +
"\x05state\"\xf2\n" +
"\n" +
"\fFilterConfig\x126\n" +
"\x06custom\x18\x01 \x01(\v2\x1e.profiledb.FilterConfig.CustomR\x06custom\x12<\n" +
"\bparental\x18\x02 \x01(\v2 .profiledb.FilterConfig.ParentalR\bparental\x12=\n" +
"\trule_list\x18\x03 \x01(\v2 .profiledb.FilterConfig.RuleListR\bruleList\x12I\n" +
"\rsafe_browsing\x18\x04 \x01(\v2$.profiledb.FilterConfig.SafeBrowsingR\fsafeBrowsing\x1aD\n" +
"\fFilterConfig\x121\n" +
"\x06custom\x18\x01 \x01(\v2\x19.fcpb.FilterConfig.CustomR\x06custom\x127\n" +
"\bparental\x18\x02 \x01(\v2\x1b.fcpb.FilterConfig.ParentalR\bparental\x128\n" +
"\trule_list\x18\x03 \x01(\v2\x1b.fcpb.FilterConfig.RuleListR\bruleList\x12D\n" +
"\rsafe_browsing\x18\x04 \x01(\v2\x1f.fcpb.FilterConfig.SafeBrowsingR\fsafeBrowsing\x12J\n" +
"\x0fcategory_filter\x18\x05 \x01(\v2!.fcpb.FilterConfig.CategoryFilterR\x0ecategoryFilter\x1aD\n" +
"\x06Custom\x12\x14\n" +
"\x05rules\x18\x03 \x03(\tR\x05rules\x12\x18\n" +
"\aenabled\x18\x04 \x01(\bR\aenabledJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03\x1a\xcc\x02\n" +
"\bParental\x12G\n" +
"\x0epause_schedule\x18\x01 \x01(\v2 .profiledb.FilterConfig.ScheduleR\rpauseSchedule\x12)\n" +
"\aenabled\x18\x04 \x01(\bR\aenabledJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03\x1a\xc7\x02\n" +
"\bParental\x12B\n" +
"\x0epause_schedule\x18\x01 \x01(\v2\x1b.fcpb.FilterConfig.ScheduleR\rpauseSchedule\x12)\n" +
"\x10blocked_services\x18\x02 \x03(\tR\x0fblockedServices\x12\x18\n" +
"\aenabled\x18\x03 \x01(\bR\aenabled\x124\n" +
"\x16adult_blocking_enabled\x18\x04 \x01(\bR\x14adultBlockingEnabled\x12=\n" +
"\x1bsafe_search_general_enabled\x18\x05 \x01(\bR\x18safeSearchGeneralEnabled\x12=\n" +
"\x1bsafe_search_youtube_enabled\x18\x06 \x01(\bR\x18safeSearchYoutubeEnabled\x1ac\n" +
"\bSchedule\x12:\n" +
"\x04week\x18\x01 \x01(\v2&.profiledb.FilterConfig.WeeklyScheduleR\x04week\x12\x1b\n" +
"\ttime_zone\x18\x02 \x01(\tR\btimeZone\x1a\xb6\x02\n" +
"\x0eWeeklySchedule\x12(\n" +
"\x03mon\x18\x01 \x01(\v2\x16.profiledb.DayIntervalR\x03mon\x12(\n" +
"\x03tue\x18\x02 \x01(\v2\x16.profiledb.DayIntervalR\x03tue\x12(\n" +
"\x03wed\x18\x03 \x01(\v2\x16.profiledb.DayIntervalR\x03wed\x12(\n" +
"\x03thu\x18\x04 \x01(\v2\x16.profiledb.DayIntervalR\x03thu\x12(\n" +
"\x03fri\x18\x05 \x01(\v2\x16.profiledb.DayIntervalR\x03fri\x12(\n" +
"\x03sat\x18\x06 \x01(\v2\x16.profiledb.DayIntervalR\x03sat\x12(\n" +
"\x03sun\x18\a \x01(\v2\x16.profiledb.DayIntervalR\x03sun\x1a6\n" +
"\x1bsafe_search_youtube_enabled\x18\x06 \x01(\bR\x18safeSearchYoutubeEnabled\x1a^\n" +
"\bSchedule\x125\n" +
"\x04week\x18\x01 \x01(\v2!.fcpb.FilterConfig.WeeklyScheduleR\x04week\x12\x1b\n" +
"\ttime_zone\x18\x02 \x01(\tR\btimeZone\x1a\x93\x02\n" +
"\x0eWeeklySchedule\x12#\n" +
"\x03mon\x18\x01 \x01(\v2\x11.fcpb.DayIntervalR\x03mon\x12#\n" +
"\x03tue\x18\x02 \x01(\v2\x11.fcpb.DayIntervalR\x03tue\x12#\n" +
"\x03wed\x18\x03 \x01(\v2\x11.fcpb.DayIntervalR\x03wed\x12#\n" +
"\x03thu\x18\x04 \x01(\v2\x11.fcpb.DayIntervalR\x03thu\x12#\n" +
"\x03fri\x18\x05 \x01(\v2\x11.fcpb.DayIntervalR\x03fri\x12#\n" +
"\x03sat\x18\x06 \x01(\v2\x11.fcpb.DayIntervalR\x03sat\x12#\n" +
"\x03sun\x18\a \x01(\v2\x11.fcpb.DayIntervalR\x03sun\x1a6\n" +
"\bRuleList\x12\x10\n" +
"\x03ids\x18\x01 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\x1a\xad\x01\n" +
"\fSafeBrowsing\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\x12:\n" +
"\x19dangerous_domains_enabled\x18\x02 \x01(\bR\x17dangerousDomainsEnabled\x12G\n" +
" newly_registered_domains_enabled\x18\x03 \x01(\bR\x1dnewlyRegisteredDomainsEnabled\"5\n" +
" newly_registered_domains_enabled\x18\x03 \x01(\bR\x1dnewlyRegisteredDomainsEnabled\x1a<\n" +
"\x0eCategoryFilter\x12\x10\n" +
"\x03ids\x18\x01 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"5\n" +
"\vDayInterval\x12\x14\n" +
"\x05start\x18\x01 \x01(\rR\x05start\x12\x10\n" +
"\x03end\x18\x02 \x01(\rR\x03end\">\n" +
@@ -3338,21 +3443,21 @@ const file_filecache_proto_rawDesc = "" +
"\x04ipv6\x18\x02 \x03(\fR\x04ipv6\"\x16\n" +
"\x14BlockingModeNXDOMAIN\"\x14\n" +
"\x12BlockingModeNullIP\"\x15\n" +
"\x13BlockingModeREFUSED\"\xa6\x02\n" +
"\x06Device\x12I\n" +
"\x0eauthentication\x18\x06 \x01(\v2!.profiledb.AuthenticationSettingsR\x0eauthentication\x12\x1b\n" +
"\x13BlockingModeREFUSED\"\xa1\x02\n" +
"\x06Device\x12D\n" +
"\x0eauthentication\x18\x06 \x01(\v2\x1c.fcpb.AuthenticationSettingsR\x0eauthentication\x12\x1b\n" +
"\tdevice_id\x18\x01 \x01(\tR\bdeviceId\x12\x1f\n" +
"\vdevice_name\x18\x03 \x01(\tR\n" +
"deviceName\x12$\n" +
"\x0ehuman_id_lower\x18\a \x01(\tR\fhumanIdLower\x12\x1b\n" +
"\tlinked_ip\x18\x02 \x01(\fR\blinkedIp\x12#\n" +
"\rdedicated_ips\x18\x04 \x03(\fR\fdedicatedIps\x12+\n" +
"\x11filtering_enabled\x18\x05 \x01(\bR\x10filteringEnabled\"\xad\x02\n" +
"\x11filtering_enabled\x18\x05 \x01(\bR\x10filteringEnabled\"\xa3\x02\n" +
"\x06Access\x12#\n" +
"\rallowlist_asn\x18\x04 \x03(\rR\fallowlistAsn\x12;\n" +
"\x0eallowlist_cidr\x18\x01 \x03(\v2\x14.profiledb.CidrRangeR\rallowlistCidr\x12#\n" +
"\rblocklist_asn\x18\x05 \x03(\rR\fblocklistAsn\x12;\n" +
"\x0eblocklist_cidr\x18\x02 \x03(\v2\x14.profiledb.CidrRangeR\rblocklistCidr\x124\n" +
"\rallowlist_asn\x18\x04 \x03(\rR\fallowlistAsn\x126\n" +
"\x0eallowlist_cidr\x18\x01 \x03(\v2\x0f.fcpb.CidrRangeR\rallowlistCidr\x12#\n" +
"\rblocklist_asn\x18\x05 \x03(\rR\fblocklistAsn\x126\n" +
"\x0eblocklist_cidr\x18\x02 \x03(\v2\x0f.fcpb.CidrRangeR\rblocklistCidr\x124\n" +
"\x16blocklist_domain_rules\x18\x03 \x03(\tR\x14blocklistDomainRules\x12)\n" +
"\x10standard_enabled\x18\x06 \x01(\bR\x0fstandardEnabled\"=\n" +
"\tCidrRange\x12\x18\n" +
@@ -3361,98 +3466,100 @@ const file_filecache_proto_rawDesc = "" +
"\x16AuthenticationSettings\x12\"\n" +
"\rdoh_auth_only\x18\x01 \x01(\bR\vdohAuthOnly\x122\n" +
"\x14password_hash_bcrypt\x18\x02 \x01(\fH\x00R\x12passwordHashBcryptB\x13\n" +
"\x11doh_password_hash\"p\n" +
"\vRatelimiter\x125\n" +
"\vclient_cidr\x18\x01 \x03(\v2\x14.profiledb.CidrRangeR\n" +
"\x11doh_password_hash\"k\n" +
"\vRatelimiter\x120\n" +
"\vclient_cidr\x18\x01 \x03(\v2\x0f.fcpb.CidrRangeR\n" +
"clientCidr\x12\x10\n" +
"\x03rps\x18\x02 \x01(\rR\x03rps\x12\x18\n" +
"\aenabled\x18\x03 \x01(\bR\aenabledb\x06proto3"
var file_filecache_proto_msgTypes = make([]protoimpl.MessageInfo, 23)
var file_filecache_proto_goTypes = []any{
(*FileCache)(nil), // 0: profiledb.FileCache
(*Profile)(nil), // 1: profiledb.Profile
(*AccountCustomDomains)(nil), // 2: profiledb.AccountCustomDomains
(*CustomDomainConfig)(nil), // 3: profiledb.CustomDomainConfig
(*FilterConfig)(nil), // 4: profiledb.FilterConfig
(*DayInterval)(nil), // 5: profiledb.DayInterval
(*BlockingModeCustomIP)(nil), // 6: profiledb.BlockingModeCustomIP
(*BlockingModeNXDOMAIN)(nil), // 7: profiledb.BlockingModeNXDOMAIN
(*BlockingModeNullIP)(nil), // 8: profiledb.BlockingModeNullIP
(*BlockingModeREFUSED)(nil), // 9: profiledb.BlockingModeREFUSED
(*Device)(nil), // 10: profiledb.Device
(*Access)(nil), // 11: profiledb.Access
(*CidrRange)(nil), // 12: profiledb.CidrRange
(*AuthenticationSettings)(nil), // 13: profiledb.AuthenticationSettings
(*Ratelimiter)(nil), // 14: profiledb.Ratelimiter
(*CustomDomainConfig_StateCurrent)(nil), // 15: profiledb.CustomDomainConfig.StateCurrent
(*CustomDomainConfig_StatePending)(nil), // 16: profiledb.CustomDomainConfig.StatePending
(*FilterConfig_Custom)(nil), // 17: profiledb.FilterConfig.Custom
(*FilterConfig_Parental)(nil), // 18: profiledb.FilterConfig.Parental
(*FilterConfig_Schedule)(nil), // 19: profiledb.FilterConfig.Schedule
(*FilterConfig_WeeklySchedule)(nil), // 20: profiledb.FilterConfig.WeeklySchedule
(*FilterConfig_RuleList)(nil), // 21: profiledb.FilterConfig.RuleList
(*FilterConfig_SafeBrowsing)(nil), // 22: profiledb.FilterConfig.SafeBrowsing
(*timestamppb.Timestamp)(nil), // 23: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 24: google.protobuf.Duration
var file_fc_proto_msgTypes = make([]protoimpl.MessageInfo, 24)
var file_fc_proto_goTypes = []any{
(*FileCache)(nil), // 0: fcpb.FileCache
(*Profile)(nil), // 1: fcpb.Profile
(*AccountCustomDomains)(nil), // 2: fcpb.AccountCustomDomains
(*CustomDomainConfig)(nil), // 3: fcpb.CustomDomainConfig
(*FilterConfig)(nil), // 4: fcpb.FilterConfig
(*DayInterval)(nil), // 5: fcpb.DayInterval
(*BlockingModeCustomIP)(nil), // 6: fcpb.BlockingModeCustomIP
(*BlockingModeNXDOMAIN)(nil), // 7: fcpb.BlockingModeNXDOMAIN
(*BlockingModeNullIP)(nil), // 8: fcpb.BlockingModeNullIP
(*BlockingModeREFUSED)(nil), // 9: fcpb.BlockingModeREFUSED
(*Device)(nil), // 10: fcpb.Device
(*Access)(nil), // 11: fcpb.Access
(*CidrRange)(nil), // 12: fcpb.CidrRange
(*AuthenticationSettings)(nil), // 13: fcpb.AuthenticationSettings
(*Ratelimiter)(nil), // 14: fcpb.Ratelimiter
(*CustomDomainConfig_StateCurrent)(nil), // 15: fcpb.CustomDomainConfig.StateCurrent
(*CustomDomainConfig_StatePending)(nil), // 16: fcpb.CustomDomainConfig.StatePending
(*FilterConfig_Custom)(nil), // 17: fcpb.FilterConfig.Custom
(*FilterConfig_Parental)(nil), // 18: fcpb.FilterConfig.Parental
(*FilterConfig_Schedule)(nil), // 19: fcpb.FilterConfig.Schedule
(*FilterConfig_WeeklySchedule)(nil), // 20: fcpb.FilterConfig.WeeklySchedule
(*FilterConfig_RuleList)(nil), // 21: fcpb.FilterConfig.RuleList
(*FilterConfig_SafeBrowsing)(nil), // 22: fcpb.FilterConfig.SafeBrowsing
(*FilterConfig_CategoryFilter)(nil), // 23: fcpb.FilterConfig.CategoryFilter
(*timestamppb.Timestamp)(nil), // 24: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 25: google.protobuf.Duration
}
var file_filecache_proto_depIdxs = []int32{
23, // 0: profiledb.FileCache.sync_time:type_name -> google.protobuf.Timestamp
1, // 1: profiledb.FileCache.profiles:type_name -> profiledb.Profile
10, // 2: profiledb.FileCache.devices:type_name -> profiledb.Device
2, // 3: profiledb.Profile.custom_domains:type_name -> profiledb.AccountCustomDomains
4, // 4: profiledb.Profile.filter_config:type_name -> profiledb.FilterConfig
11, // 5: profiledb.Profile.access:type_name -> profiledb.Access
6, // 6: profiledb.Profile.blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 7: profiledb.Profile.blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 8: profiledb.Profile.blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 9: profiledb.Profile.blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
14, // 10: profiledb.Profile.ratelimiter:type_name -> profiledb.Ratelimiter
24, // 11: profiledb.Profile.filtered_response_ttl:type_name -> google.protobuf.Duration
6, // 12: profiledb.Profile.adult_blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 13: profiledb.Profile.adult_blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 14: profiledb.Profile.adult_blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 15: profiledb.Profile.adult_blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
6, // 16: profiledb.Profile.safe_browsing_blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 17: profiledb.Profile.safe_browsing_blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 18: profiledb.Profile.safe_browsing_blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 19: profiledb.Profile.safe_browsing_blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
3, // 20: profiledb.AccountCustomDomains.domains:type_name -> profiledb.CustomDomainConfig
15, // 21: profiledb.CustomDomainConfig.state_current:type_name -> profiledb.CustomDomainConfig.StateCurrent
16, // 22: profiledb.CustomDomainConfig.state_pending:type_name -> profiledb.CustomDomainConfig.StatePending
17, // 23: profiledb.FilterConfig.custom:type_name -> profiledb.FilterConfig.Custom
18, // 24: profiledb.FilterConfig.parental:type_name -> profiledb.FilterConfig.Parental
21, // 25: profiledb.FilterConfig.rule_list:type_name -> profiledb.FilterConfig.RuleList
22, // 26: profiledb.FilterConfig.safe_browsing:type_name -> profiledb.FilterConfig.SafeBrowsing
13, // 27: profiledb.Device.authentication:type_name -> profiledb.AuthenticationSettings
12, // 28: profiledb.Access.allowlist_cidr:type_name -> profiledb.CidrRange
12, // 29: profiledb.Access.blocklist_cidr:type_name -> profiledb.CidrRange
12, // 30: profiledb.Ratelimiter.client_cidr:type_name -> profiledb.CidrRange
23, // 31: profiledb.CustomDomainConfig.StateCurrent.not_before:type_name -> google.protobuf.Timestamp
23, // 32: profiledb.CustomDomainConfig.StateCurrent.not_after:type_name -> google.protobuf.Timestamp
23, // 33: profiledb.CustomDomainConfig.StatePending.expire:type_name -> google.protobuf.Timestamp
19, // 34: profiledb.FilterConfig.Parental.pause_schedule:type_name -> profiledb.FilterConfig.Schedule
20, // 35: profiledb.FilterConfig.Schedule.week:type_name -> profiledb.FilterConfig.WeeklySchedule
5, // 36: profiledb.FilterConfig.WeeklySchedule.mon:type_name -> profiledb.DayInterval
5, // 37: profiledb.FilterConfig.WeeklySchedule.tue:type_name -> profiledb.DayInterval
5, // 38: profiledb.FilterConfig.WeeklySchedule.wed:type_name -> profiledb.DayInterval
5, // 39: profiledb.FilterConfig.WeeklySchedule.thu:type_name -> profiledb.DayInterval
5, // 40: profiledb.FilterConfig.WeeklySchedule.fri:type_name -> profiledb.DayInterval
5, // 41: profiledb.FilterConfig.WeeklySchedule.sat:type_name -> profiledb.DayInterval
5, // 42: profiledb.FilterConfig.WeeklySchedule.sun:type_name -> profiledb.DayInterval
43, // [43:43] is the sub-list for method output_type
43, // [43:43] is the sub-list for method input_type
43, // [43:43] is the sub-list for extension type_name
43, // [43:43] is the sub-list for extension extendee
0, // [0:43] is the sub-list for field type_name
var file_fc_proto_depIdxs = []int32{
24, // 0: fcpb.FileCache.sync_time:type_name -> google.protobuf.Timestamp
1, // 1: fcpb.FileCache.profiles:type_name -> fcpb.Profile
10, // 2: fcpb.FileCache.devices:type_name -> fcpb.Device
2, // 3: fcpb.Profile.custom_domains:type_name -> fcpb.AccountCustomDomains
4, // 4: fcpb.Profile.filter_config:type_name -> fcpb.FilterConfig
11, // 5: fcpb.Profile.access:type_name -> fcpb.Access
6, // 6: fcpb.Profile.blocking_mode_custom_ip:type_name -> fcpb.BlockingModeCustomIP
7, // 7: fcpb.Profile.blocking_mode_nxdomain:type_name -> fcpb.BlockingModeNXDOMAIN
8, // 8: fcpb.Profile.blocking_mode_null_ip:type_name -> fcpb.BlockingModeNullIP
9, // 9: fcpb.Profile.blocking_mode_refused:type_name -> fcpb.BlockingModeREFUSED
14, // 10: fcpb.Profile.ratelimiter:type_name -> fcpb.Ratelimiter
25, // 11: fcpb.Profile.filtered_response_ttl:type_name -> google.protobuf.Duration
6, // 12: fcpb.Profile.adult_blocking_mode_custom_ip:type_name -> fcpb.BlockingModeCustomIP
7, // 13: fcpb.Profile.adult_blocking_mode_nxdomain:type_name -> fcpb.BlockingModeNXDOMAIN
8, // 14: fcpb.Profile.adult_blocking_mode_null_ip:type_name -> fcpb.BlockingModeNullIP
9, // 15: fcpb.Profile.adult_blocking_mode_refused:type_name -> fcpb.BlockingModeREFUSED
6, // 16: fcpb.Profile.safe_browsing_blocking_mode_custom_ip:type_name -> fcpb.BlockingModeCustomIP
7, // 17: fcpb.Profile.safe_browsing_blocking_mode_nxdomain:type_name -> fcpb.BlockingModeNXDOMAIN
8, // 18: fcpb.Profile.safe_browsing_blocking_mode_null_ip:type_name -> fcpb.BlockingModeNullIP
9, // 19: fcpb.Profile.safe_browsing_blocking_mode_refused:type_name -> fcpb.BlockingModeREFUSED
3, // 20: fcpb.AccountCustomDomains.domains:type_name -> fcpb.CustomDomainConfig
15, // 21: fcpb.CustomDomainConfig.state_current:type_name -> fcpb.CustomDomainConfig.StateCurrent
16, // 22: fcpb.CustomDomainConfig.state_pending:type_name -> fcpb.CustomDomainConfig.StatePending
17, // 23: fcpb.FilterConfig.custom:type_name -> fcpb.FilterConfig.Custom
18, // 24: fcpb.FilterConfig.parental:type_name -> fcpb.FilterConfig.Parental
21, // 25: fcpb.FilterConfig.rule_list:type_name -> fcpb.FilterConfig.RuleList
22, // 26: fcpb.FilterConfig.safe_browsing:type_name -> fcpb.FilterConfig.SafeBrowsing
23, // 27: fcpb.FilterConfig.category_filter:type_name -> fcpb.FilterConfig.CategoryFilter
13, // 28: fcpb.Device.authentication:type_name -> fcpb.AuthenticationSettings
12, // 29: fcpb.Access.allowlist_cidr:type_name -> fcpb.CidrRange
12, // 30: fcpb.Access.blocklist_cidr:type_name -> fcpb.CidrRange
12, // 31: fcpb.Ratelimiter.client_cidr:type_name -> fcpb.CidrRange
24, // 32: fcpb.CustomDomainConfig.StateCurrent.not_before:type_name -> google.protobuf.Timestamp
24, // 33: fcpb.CustomDomainConfig.StateCurrent.not_after:type_name -> google.protobuf.Timestamp
24, // 34: fcpb.CustomDomainConfig.StatePending.expire:type_name -> google.protobuf.Timestamp
19, // 35: fcpb.FilterConfig.Parental.pause_schedule:type_name -> fcpb.FilterConfig.Schedule
20, // 36: fcpb.FilterConfig.Schedule.week:type_name -> fcpb.FilterConfig.WeeklySchedule
5, // 37: fcpb.FilterConfig.WeeklySchedule.mon:type_name -> fcpb.DayInterval
5, // 38: fcpb.FilterConfig.WeeklySchedule.tue:type_name -> fcpb.DayInterval
5, // 39: fcpb.FilterConfig.WeeklySchedule.wed:type_name -> fcpb.DayInterval
5, // 40: fcpb.FilterConfig.WeeklySchedule.thu:type_name -> fcpb.DayInterval
5, // 41: fcpb.FilterConfig.WeeklySchedule.fri:type_name -> fcpb.DayInterval
5, // 42: fcpb.FilterConfig.WeeklySchedule.sat:type_name -> fcpb.DayInterval
5, // 43: fcpb.FilterConfig.WeeklySchedule.sun:type_name -> fcpb.DayInterval
44, // [44:44] is the sub-list for method output_type
44, // [44:44] is the sub-list for method input_type
44, // [44:44] is the sub-list for extension type_name
44, // [44:44] is the sub-list for extension extendee
0, // [0:44] is the sub-list for field type_name
}
func init() { file_filecache_proto_init() }
func file_filecache_proto_init() {
if File_filecache_proto != nil {
func init() { file_fc_proto_init() }
func file_fc_proto_init() {
if File_fc_proto != nil {
return
}
file_filecache_proto_msgTypes[1].OneofWrappers = []any{
file_fc_proto_msgTypes[1].OneofWrappers = []any{
(*profile_BlockingModeCustomIp)(nil),
(*profile_BlockingModeNxdomain)(nil),
(*profile_BlockingModeNullIp)(nil),
@@ -3466,28 +3573,28 @@ func file_filecache_proto_init() {
(*profile_SafeBrowsingBlockingModeNullIp)(nil),
(*profile_SafeBrowsingBlockingModeRefused)(nil),
}
file_filecache_proto_msgTypes[3].OneofWrappers = []any{
file_fc_proto_msgTypes[3].OneofWrappers = []any{
(*customDomainConfig_StateCurrent_)(nil),
(*customDomainConfig_StatePending_)(nil),
}
file_filecache_proto_msgTypes[13].OneofWrappers = []any{
file_fc_proto_msgTypes[13].OneofWrappers = []any{
(*authenticationSettings_PasswordHashBcrypt)(nil),
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_filecache_proto_rawDesc), len(file_filecache_proto_rawDesc)),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_fc_proto_rawDesc), len(file_fc_proto_rawDesc)),
NumEnums: 0,
NumMessages: 23,
NumMessages: 24,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_filecache_proto_goTypes,
DependencyIndexes: file_filecache_proto_depIdxs,
MessageInfos: file_filecache_proto_msgTypes,
GoTypes: file_fc_proto_goTypes,
DependencyIndexes: file_fc_proto_depIdxs,
MessageInfos: file_fc_proto_msgTypes,
}.Build()
File_filecache_proto = out.File
file_filecache_proto_goTypes = nil
file_filecache_proto_depIdxs = nil
File_fc_proto = out.File
file_fc_proto_goTypes = nil
file_fc_proto_depIdxs = nil
}


@@ -0,0 +1,196 @@
// NOTE:  Keep in sync with filecachepb/filecache.proto.
//
// TODO(f.setrakov):  Remove filecachepb/filecache.proto when the migration to
// the opaque API is finished.
syntax = "proto3";
package fcpb;
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
message FileCache {
google.protobuf.Timestamp sync_time = 1;
repeated Profile profiles = 2;
repeated Device devices = 3;
int32 version = 4;
}
message Profile {
AccountCustomDomains custom_domains = 20;
FilterConfig filter_config = 1;
Access access = 2;
oneof blocking_mode {
BlockingModeCustomIP blocking_mode_custom_ip = 3;
BlockingModeNXDOMAIN blocking_mode_nxdomain = 4;
BlockingModeNullIP blocking_mode_null_ip = 5;
BlockingModeREFUSED blocking_mode_refused = 6;
}
Ratelimiter ratelimiter = 7;
string account_id = 19;
string profile_id = 8;
repeated string device_ids = 9;
google.protobuf.Duration filtered_response_ttl = 10;
bool auto_devices_enabled = 11;
bool block_chrome_prefetch = 12;
bool block_firefox_canary = 13;
bool block_private_relay = 14;
bool deleted = 15;
bool filtering_enabled = 16;
bool ip_log_enabled = 17;
bool query_log_enabled = 18;
oneof adult_blocking_mode {
BlockingModeCustomIP adult_blocking_mode_custom_ip = 21;
BlockingModeNXDOMAIN adult_blocking_mode_nxdomain = 22;
BlockingModeNullIP adult_blocking_mode_null_ip = 23;
BlockingModeREFUSED adult_blocking_mode_refused = 24;
}
oneof safe_browsing_blocking_mode {
BlockingModeCustomIP safe_browsing_blocking_mode_custom_ip = 25;
BlockingModeNXDOMAIN safe_browsing_blocking_mode_nxdomain = 26;
BlockingModeNullIP safe_browsing_blocking_mode_null_ip = 27;
BlockingModeREFUSED safe_browsing_blocking_mode_refused = 28;
}
}
message AccountCustomDomains {
repeated CustomDomainConfig domains = 1;
bool enabled = 2;
}
message CustomDomainConfig {
message StateCurrent {
google.protobuf.Timestamp not_before = 1;
google.protobuf.Timestamp not_after = 2;
string cert_name = 3;
bool enabled = 4;
}
message StatePending {
google.protobuf.Timestamp expire = 1;
string well_known_path = 2;
}
oneof state {
StateCurrent state_current = 1;
StatePending state_pending = 2;
}
repeated string domains = 3;
}
message FilterConfig {
message Custom {
reserved 1;
reserved 2;
repeated string rules = 3;
bool enabled = 4;
}
message Parental {
Schedule pause_schedule = 1;
repeated string blocked_services = 2;
bool enabled = 3;
bool adult_blocking_enabled = 4;
bool safe_search_general_enabled = 5;
bool safe_search_youtube_enabled = 6;
}
message Schedule {
WeeklySchedule week = 1;
string time_zone = 2;
}
message WeeklySchedule {
DayInterval mon = 1;
DayInterval tue = 2;
DayInterval wed = 3;
DayInterval thu = 4;
DayInterval fri = 5;
DayInterval sat = 6;
DayInterval sun = 7;
}
message RuleList {
repeated string ids = 1;
bool enabled = 2;
}
message SafeBrowsing {
bool enabled = 1;
bool dangerous_domains_enabled = 2;
bool newly_registered_domains_enabled = 3;
}
message CategoryFilter {
repeated string ids = 1;
bool enabled = 2;
}
Custom custom = 1;
Parental parental = 2;
RuleList rule_list = 3;
SafeBrowsing safe_browsing = 4;
CategoryFilter category_filter = 5;
}
message DayInterval {
uint32 start = 1;
uint32 end = 2;
}
message BlockingModeCustomIP {
repeated bytes ipv4 = 1;
repeated bytes ipv6 = 2;
}
message BlockingModeNXDOMAIN {}
message BlockingModeNullIP {}
message BlockingModeREFUSED {}
message Device {
AuthenticationSettings authentication = 6;
string device_id = 1;
string device_name = 3;
string human_id_lower = 7;
bytes linked_ip = 2;
repeated bytes dedicated_ips = 4;
bool filtering_enabled = 5;
}
message Access {
repeated uint32 allowlist_asn = 4;
repeated CidrRange allowlist_cidr = 1;
repeated uint32 blocklist_asn = 5;
repeated CidrRange blocklist_cidr = 2;
repeated string blocklist_domain_rules = 3;
bool standard_enabled = 6;
}
message CidrRange {
bytes address = 1;
uint32 prefix = 2;
}
message AuthenticationSettings {
bool doh_auth_only = 1;
oneof doh_password_hash {
bytes password_hash_bcrypt = 2;
}
}
message Ratelimiter {
repeated CidrRange client_cidr = 1;
uint32 rps = 2;
bool enabled = 3;
}


@@ -0,0 +1,58 @@
package fcpb_test
import (
"fmt"
"net/netip"
"testing"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/profiledbtest"
"github.com/stretchr/testify/assert"
)
func TestCIDRRangesToPrefixes(t *testing.T) {
invalidAddr := []byte{0, 1, 2, 3, 4}
testCases := []struct {
name string
wantPanicMsg string
in []*fcpb.CidrRange
want []netip.Prefix
}{{
name: "ipv4",
in: []*fcpb.CidrRange{fcpb.CidrRange_builder{
Address: profiledbtest.IPv4Bytes,
Prefix: 24,
}.Build()},
want: []netip.Prefix{profiledbtest.IPv4Prefix},
}, {
name: "ipv6",
in: []*fcpb.CidrRange{fcpb.CidrRange_builder{
Address: profiledbtest.IPv6Bytes,
Prefix: 32,
}.Build()},
want: []netip.Prefix{profiledbtest.IPv6Prefix},
}, {
name: "panic",
in: []*fcpb.CidrRange{fcpb.CidrRange_builder{
Address: invalidAddr,
}.Build()},
wantPanicMsg: fmt.Sprintf("bad address: %v", invalidAddr),
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
f := func() {
got := fcpb.CIDRRangesToPrefixes(tc.in)
assert.Equal(t, tc.want, got)
}
if tc.wantPanicMsg != "" {
assert.PanicsWithError(t, tc.wantPanicMsg, f)
} else {
assert.NotPanics(t, f)
}
})
}
}


@@ -0,0 +1,151 @@
package filecacheopb
import (
"fmt"
"net/netip"
"github.com/AdguardTeam/AdGuardDNS/internal/agd"
"github.com/AdguardTeam/AdGuardDNS/internal/agdpasswd"
"github.com/AdguardTeam/AdGuardDNS/internal/agdprotobuf"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb"
"github.com/AdguardTeam/golibs/errors"
)
// devicesFromProtobuf converts protobuf device structures into internal ones.
func devicesFromProtobuf(pbDevices []*fcpb.Device) (devices []*agd.Device, err error) {
devices = make([]*agd.Device, 0, len(pbDevices))
for i, pbDev := range pbDevices {
var dev *agd.Device
dev, err = deviceToInternal(pbDev)
if err != nil {
return nil, fmt.Errorf("device: at index %d: %w", i, err)
}
devices = append(devices, dev)
}
return devices, nil
}
// deviceToInternal converts a protobuf device structure to an internal one.
// pbDev must not be nil.
func deviceToInternal(pbDev *fcpb.Device) (d *agd.Device, err error) {
if pbDev == nil {
panic(fmt.Errorf("pbDev: %w", errors.ErrNoValue))
}
var linkedIP netip.Addr
err = linkedIP.UnmarshalBinary(pbDev.GetLinkedIp())
if err != nil {
return nil, fmt.Errorf("linked ip: %w", err)
}
var dedicatedIPs []netip.Addr
dedicatedIPs, err = agdprotobuf.ByteSlicesToIPs(pbDev.GetDedicatedIps())
if err != nil {
return nil, fmt.Errorf("dedicated ips: %w", err)
}
deviceID := pbDev.GetDeviceId()
auth, err := authToInternal(pbDev.GetAuthentication())
if err != nil {
return nil, fmt.Errorf("auth: %s: %w", deviceID, err)
}
return &agd.Device{
Auth: auth,
// Consider device IDs to have been prevalidated.
ID: agd.DeviceID(deviceID),
LinkedIP: linkedIP,
// Consider device names to have been prevalidated.
Name: agd.DeviceName(pbDev.GetDeviceName()),
// Consider lowercase HumanIDs to have been prevalidated.
HumanIDLower: agd.HumanIDLower(pbDev.GetHumanIdLower()),
DedicatedIPs: dedicatedIPs,
FilteringEnabled: pbDev.GetFilteringEnabled(),
}, nil
}
// authToInternal converts a protobuf auth settings structure to an internal
// one.  If pbAuth is nil, authToInternal returns non-nil settings with the
// Enabled field set to false; otherwise, it sets the Enabled field to true.
func authToInternal(pbAuth *fcpb.AuthenticationSettings) (s *agd.AuthSettings, err error) {
if pbAuth == nil {
return &agd.AuthSettings{
Enabled: false,
PasswordHash: agdpasswd.AllowAuthenticator{},
}, nil
}
ph, err := dohPasswordToInternal(pbAuth)
if err != nil {
return nil, fmt.Errorf("password hash: %w", err)
}
return &agd.AuthSettings{
PasswordHash: ph,
Enabled: true,
DoHAuthOnly: pbAuth.GetDohAuthOnly(),
}, nil
}
// dohPasswordToInternal converts a protobuf DoH password hash sum-type to an
// internal one.  If pbp is nil or no hash is set, it returns an
// [agdpasswd.AllowAuthenticator].
func dohPasswordToInternal(
pbp *fcpb.AuthenticationSettings,
) (p agdpasswd.Authenticator, err error) {
switch pbp.WhichDohPasswordHash() {
case fcpb.AuthenticationSettings_DohPasswordHash_not_set_case:
return agdpasswd.AllowAuthenticator{}, nil
case fcpb.AuthenticationSettings_PasswordHashBcrypt_case:
return agdpasswd.NewPasswordHashBcrypt(pbp.GetPasswordHashBcrypt()), nil
default:
return nil, fmt.Errorf("bad pb auth doh password hash %T(%[1]v)", pbp)
}
}
// devicesToProtobuf converts a slice of devices to protobuf structures.
func devicesToProtobuf(devices []*agd.Device) (pbDevices []*fcpb.Device) {
pbDevices = make([]*fcpb.Device, 0, len(devices))
for _, d := range devices {
pbD := fcpb.Device_builder{
Authentication: authToProtobuf(d.Auth),
DeviceId: string(d.ID),
LinkedIp: agdprotobuf.IPToBytes(d.LinkedIP),
HumanIdLower: string(d.HumanIDLower),
DeviceName: string(d.Name),
DedicatedIps: agdprotobuf.IPsToByteSlices(d.DedicatedIPs),
FilteringEnabled: d.FilteringEnabled,
}.Build()
pbDevices = append(pbDevices, pbD)
}
return pbDevices
}
// authToProtobuf converts device auth settings to a protobuf structure.  It
// returns nil if s is nil or the Enabled field is set to false.
func authToProtobuf(s *agd.AuthSettings) (a *fcpb.AuthenticationSettings) {
if s == nil || !s.Enabled {
return nil
}
return fcpb.AuthenticationSettings_builder{
DohAuthOnly: s.DoHAuthOnly,
PasswordHashBcrypt: dohPasswordToProtobuf(s.PasswordHash),
}.Build()
}
// dohPasswordToProtobuf converts an auth password hash sum-type to a protobuf
// one.
func dohPasswordToProtobuf(p agdpasswd.Authenticator) (pbp []byte) {
switch p := p.(type) {
case agdpasswd.AllowAuthenticator:
return nil
case *agdpasswd.PasswordHashBcrypt:
return p.PasswordHash()
default:
panic(fmt.Errorf("bad password hash %T(%[1]v)", p))
}
}


@@ -1,3 +1,51 @@
// Package filecacheopb contains encoding and decoding logic for the opaque
// file cache.
package filecacheopb
import (
"fmt"
"log/slog"
"github.com/AdguardTeam/AdGuardDNS/internal/access"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb"
"github.com/c2h5oh/datasize"
"google.golang.org/protobuf/types/known/timestamppb"
)
// fileCacheToInternal converts the protobuf-encoded data into a cache
// structure.  fc, baseCustomLogger, and cons must not be nil.
func fileCacheToInternal(
fc *fcpb.FileCache,
baseCustomLogger *slog.Logger,
cons *access.ProfileConstructor,
respSzEst datasize.ByteSize,
) (c *internal.FileCache, err error) {
profiles, err := profilesToInternal(fc.GetProfiles(), baseCustomLogger, cons, respSzEst)
if err != nil {
return nil, fmt.Errorf("converting profiles: %w", err)
}
devices, err := devicesFromProtobuf(fc.GetDevices())
if err != nil {
return nil, fmt.Errorf("converting devices: %w", err)
}
return &internal.FileCache{
SyncTime: fc.GetSyncTime().AsTime(),
Profiles: profiles,
Devices: devices,
Version: fc.GetVersion(),
}, nil
}
// fileCacheToProtobuf converts the cache structure into a protobuf structure
// for encoding.
func fileCacheToProtobuf(c *internal.FileCache) (cache *fcpb.FileCache) {
return fcpb.FileCache_builder{
SyncTime: timestamppb.New(c.SyncTime),
Profiles: profilesToProtobuf(c.Profiles),
Devices: devicesToProtobuf(c.Devices),
Version: c.Version,
}.Build()
}


@@ -19,12 +19,14 @@ import (
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
"github.com/c2h5oh/datasize"
"google.golang.org/protobuf/types/known/durationpb"
"google.golang.org/protobuf/types/known/timestamppb"
)
// profilesToInternal converts protobuf profile structures into internal ones.
// baseCustomLogger and cons must not be nil.
//
//lint:ignore U1000 TODO(f.setrakov): Use.
// TODO(f.setrakov): Do not rely on builders and reuse entities.
func profilesToInternal(
pbProfiles []*fcpb.Profile,
baseCustomLogger *slog.Logger,
@@ -170,6 +172,23 @@ func configClientToInternal(
return fltConf, nil
}
// categoryFilterToInternal converts the filter config's protobuf category
// filter structure to an internal one.  If pbCatFlt is nil, it returns a
// disabled config.
func categoryFilterToInternal(
pbCatFlt *fcpb.FilterConfig_CategoryFilter,
) (c *filter.ConfigCategories) {
if pbCatFlt == nil {
return &filter.ConfigCategories{}
}
// Consider the categories to have been prevalidated.
ids := agdprotobuf.UnsafelyConvertStrSlice[string, filter.CategoryID](pbCatFlt.GetIds())
return &filter.ConfigCategories{
IDs: ids,
Enabled: pbCatFlt.GetEnabled(),
}
}
// configParentalToInternal converts filter config's protobuf parental config
// structures to internal ones. pbFltConf must not be nil.
func configParentalToInternal(
@@ -179,6 +198,7 @@ func configParentalToInternal(
parental := pbFltConf.GetParental()
return &filter.ConfigParental{
Categories: categoryFilterToInternal(pbFltConf.GetCategoryFilter()),
PauseSchedule: schedule,
// Consider blocked-service IDs to have been prevalidated.
BlockedServices: agdprotobuf.UnsafelyConvertStrSlice[string, filter.BlockedServiceID](
@@ -470,3 +490,318 @@ func accessToInternal(pbAccess *fcpb.Access, cons *access.ProfileConstructor) (a
StandardEnabled: pbAccess.GetStandardEnabled(),
})
}
// profilesToProtobuf converts a slice of profiles to protobuf structures.
func profilesToProtobuf(profiles []*agd.Profile) (pbProfiles []*fcpb.Profile) {
pbProfiles = make([]*fcpb.Profile, 0, len(profiles))
for i, p := range profiles {
if p == nil {
panic(fmt.Errorf("converting profiles: at index %d: %w", i, errors.ErrNoValue))
}
pbProfiles = append(pbProfiles, profileToProtobuf(p))
}
return pbProfiles
}
// profileToProtobuf converts a profile to protobuf. p must not be nil.
func profileToProtobuf(p *agd.Profile) (pbProf *fcpb.Profile) {
defer func() {
err := errors.FromRecovered(recover())
if err != nil {
// Repanic, adding the profile information for easier debugging.
panic(fmt.Errorf("converting profile %q: %w", p.ID, err))
}
}()
pbProfBuilder := &fcpb.Profile_builder{
CustomDomains: customDomainsToProtobuf(p.CustomDomains),
FilterConfig: filterConfigToProtobuf(p.FilterConfig),
Access: accessToProtobuf(p.Access.Config()),
Ratelimiter: ratelimiterToProtobuf(p.Ratelimiter.Config()),
AccountId: string(p.AccountID),
ProfileId: string(p.ID),
DeviceIds: agdprotobuf.UnsafelyConvertStrSlice[agd.DeviceID, string](
p.DeviceIDs.Values(),
),
FilteredResponseTtl: durationpb.New(p.FilteredResponseTTL),
AutoDevicesEnabled: p.AutoDevicesEnabled,
BlockChromePrefetch: p.BlockChromePrefetch,
BlockFirefoxCanary: p.BlockFirefoxCanary,
BlockPrivateRelay: p.BlockPrivateRelay,
Deleted: p.Deleted,
FilteringEnabled: p.FilteringEnabled,
IpLogEnabled: p.IPLogEnabled,
QueryLogEnabled: p.QueryLogEnabled,
}
setBlockingMode(pbProfBuilder, p.BlockingMode)
setAdultBlockingMode(pbProfBuilder, p.AdultBlockingMode)
setSafeBrowsingBlockingMode(pbProfBuilder, p.SafeBrowsingBlockingMode)
return pbProfBuilder.Build()
}
// customDomainsToProtobuf converts the custom-domains configuration to
// protobuf. acd must not be nil.
func customDomainsToProtobuf(acd *agd.AccountCustomDomains) (pbACD *fcpb.AccountCustomDomains) {
return fcpb.AccountCustomDomains_builder{
Domains: customDomainConfigsToProtobuf(acd.Domains),
Enabled: acd.Enabled,
}.Build()
}
// customDomainConfigsToProtobuf converts the configuration of custom-domain
// sets to protobuf.
func customDomainConfigsToProtobuf(
confs []*agd.CustomDomainConfig,
) (pbConfs []*fcpb.CustomDomainConfig) {
l := len(confs)
if l == 0 {
return nil
}
pbConfs = make([]*fcpb.CustomDomainConfig, 0, l)
for i, c := range confs {
conf := fcpb.CustomDomainConfig_builder{
Domains: slices.Clone(c.Domains),
}.Build()
switch s := c.State.(type) {
case *agd.CustomDomainStateCurrent:
curr := fcpb.CustomDomainConfig_StateCurrent_builder{
NotBefore: timestamppb.New(s.NotBefore),
NotAfter: timestamppb.New(s.NotAfter),
CertName: string(s.CertName),
Enabled: s.Enabled,
}.Build()
conf.SetStateCurrent(curr)
case *agd.CustomDomainStatePending:
pend := fcpb.CustomDomainConfig_StatePending_builder{
Expire: timestamppb.New(s.Expire),
WellKnownPath: s.WellKnownPath,
}.Build()
conf.SetStatePending(pend)
default:
panic(fmt.Errorf(
"at index %d: custom domain state: %T(%[2]v): %w",
i,
s,
errors.ErrBadEnumValue,
))
}
pbConfs = append(pbConfs, conf)
}
return pbConfs
}
// filterConfigToProtobuf converts the filtering configuration to protobuf. c
// must not be nil.
func filterConfigToProtobuf(c *filter.ConfigClient) (fc *fcpb.FilterConfig) {
var rules []string
if c.Custom.Enabled {
filterRules := c.Custom.Filter.Rules()
rules = agdprotobuf.UnsafelyConvertStrSlice[filter.RuleText, string](filterRules)
}
custom := fcpb.FilterConfig_Custom_builder{
Rules: rules,
Enabled: c.Custom.Enabled,
}.Build()
parental := fcpb.FilterConfig_Parental_builder{
PauseSchedule: scheduleToProtobuf(c.Parental.PauseSchedule),
BlockedServices: agdprotobuf.UnsafelyConvertStrSlice[filter.BlockedServiceID, string](
c.Parental.BlockedServices,
),
Enabled: c.Parental.Enabled,
AdultBlockingEnabled: c.Parental.AdultBlockingEnabled,
SafeSearchGeneralEnabled: c.Parental.SafeSearchGeneralEnabled,
SafeSearchYoutubeEnabled: c.Parental.SafeSearchYouTubeEnabled,
}.Build()
ruleList := fcpb.FilterConfig_RuleList_builder{
Ids: agdprotobuf.UnsafelyConvertStrSlice[filter.ID, string](c.RuleList.IDs),
Enabled: c.RuleList.Enabled,
}.Build()
safeBrowsing := fcpb.FilterConfig_SafeBrowsing_builder{
Enabled: c.SafeBrowsing.Enabled,
DangerousDomainsEnabled: c.SafeBrowsing.DangerousDomainsEnabled,
NewlyRegisteredDomainsEnabled: c.SafeBrowsing.NewlyRegisteredDomainsEnabled,
}.Build()
categories := fcpb.FilterConfig_CategoryFilter_builder{
Ids: agdprotobuf.UnsafelyConvertStrSlice[
filter.CategoryID,
string,
](c.Parental.Categories.IDs),
Enabled: c.Parental.Categories.Enabled,
}.Build()
return fcpb.FilterConfig_builder{
Custom: custom,
Parental: parental,
RuleList: ruleList,
SafeBrowsing: safeBrowsing,
CategoryFilter: categories,
}.Build()
}
// scheduleToProtobuf converts schedule configuration to protobuf. If c is nil,
// conf is nil.
func scheduleToProtobuf(c *filter.ConfigSchedule) (conf *fcpb.FilterConfig_Schedule) {
if c == nil {
return nil
}
return fcpb.FilterConfig_Schedule_builder{
TimeZone: c.TimeZone.String(),
Week: fcpb.FilterConfig_WeeklySchedule_builder{
Mon: dayIntervalToProtobuf(c.Week[time.Monday]),
Tue: dayIntervalToProtobuf(c.Week[time.Tuesday]),
Wed: dayIntervalToProtobuf(c.Week[time.Wednesday]),
Thu: dayIntervalToProtobuf(c.Week[time.Thursday]),
Fri: dayIntervalToProtobuf(c.Week[time.Friday]),
Sat: dayIntervalToProtobuf(c.Week[time.Saturday]),
Sun: dayIntervalToProtobuf(c.Week[time.Sunday]),
}.Build(),
}.Build()
}
// dayIntervalToProtobuf converts a daily schedule interval to protobuf. If i
// is nil, ivl is nil.
func dayIntervalToProtobuf(i *filter.DayInterval) (ivl *fcpb.DayInterval) {
if i == nil {
return nil
}
return fcpb.DayInterval_builder{
Start: uint32(i.Start),
End: uint32(i.End),
}.Build()
}
// accessToProtobuf converts access settings to a protobuf structure.  If c is
// nil, ac is nil.
func accessToProtobuf(c *access.ProfileConfig) (ac *fcpb.Access) {
if c == nil {
return nil
}
allowedASNs := agdprotobuf.UnsafelyConvertUint32Slice[geoip.ASN, uint32](c.AllowedASN)
blockedASNs := agdprotobuf.UnsafelyConvertUint32Slice[geoip.ASN, uint32](c.BlockedASN)
return fcpb.Access_builder{
AllowlistAsn: allowedASNs,
AllowlistCidr: prefixesToProtobuf(c.AllowedNets),
BlocklistAsn: blockedASNs,
BlocklistCidr: prefixesToProtobuf(c.BlockedNets),
BlocklistDomainRules: c.BlocklistDomainRules,
StandardEnabled: c.StandardEnabled,
}.Build()
}
// prefixesToProtobuf converts a slice of [netip.Prefix] to protobuf
// structures.  nets must be valid.
func prefixesToProtobuf(nets []netip.Prefix) (cidrs []*fcpb.CidrRange) {
for _, n := range nets {
cidr := fcpb.CidrRange_builder{
Address: n.Addr().AsSlice(),
// #nosec G115 -- Assume that the prefixes from profiledb are always
// valid.
Prefix: uint32(n.Bits()),
}.Build()
cidrs = append(cidrs, cidr)
}
return cidrs
}
// setBlockingMode populates the protobuf profile builder with a blocking-mode
// sum-type.
//
// TODO(d.kolyshev): DRY with setProtobufAdultBlockingMode and
// setProtobufSafeBrowsingBlockingMode.
func setBlockingMode(pb *fcpb.Profile_builder, m dnsmsg.BlockingMode) {
switch m := m.(type) {
case *dnsmsg.BlockingModeCustomIP:
pb.BlockingModeCustomIp = fcpb.BlockingModeCustomIP_builder{
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
}.Build()
case *dnsmsg.BlockingModeNXDOMAIN:
pb.BlockingModeNxdomain = &fcpb.BlockingModeNXDOMAIN{}
case *dnsmsg.BlockingModeNullIP:
pb.BlockingModeNullIp = &fcpb.BlockingModeNullIP{}
case *dnsmsg.BlockingModeREFUSED:
pb.BlockingModeRefused = &fcpb.BlockingModeREFUSED{}
default:
panic(fmt.Errorf("bad blocking mode %T(%[1]v)", m))
}
}
// setAdultBlockingMode populates the protobuf profile builder with a
// blocking-mode sum-type.
func setAdultBlockingMode(pb *fcpb.Profile_builder, m dnsmsg.BlockingMode) {
switch m := m.(type) {
case nil:
return
case *dnsmsg.BlockingModeCustomIP:
pb.AdultBlockingModeCustomIp = fcpb.BlockingModeCustomIP_builder{
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
}.Build()
case *dnsmsg.BlockingModeNXDOMAIN:
pb.AdultBlockingModeNxdomain = &fcpb.BlockingModeNXDOMAIN{}
case *dnsmsg.BlockingModeNullIP:
pb.AdultBlockingModeNullIp = &fcpb.BlockingModeNullIP{}
case *dnsmsg.BlockingModeREFUSED:
pb.AdultBlockingModeRefused = &fcpb.BlockingModeREFUSED{}
default:
panic(fmt.Errorf("bad adult blocking mode %T(%[1]v)", m))
}
}
// setSafeBrowsingBlockingMode populates the protobuf profile builder with a
// blocking-mode sum-type.
func setSafeBrowsingBlockingMode(pb *fcpb.Profile_builder, m dnsmsg.BlockingMode) {
switch m := m.(type) {
case nil:
return
case *dnsmsg.BlockingModeCustomIP:
pb.SafeBrowsingBlockingModeCustomIp = fcpb.BlockingModeCustomIP_builder{
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
}.Build()
case *dnsmsg.BlockingModeNXDOMAIN:
pb.SafeBrowsingBlockingModeNxdomain = &fcpb.BlockingModeNXDOMAIN{}
case *dnsmsg.BlockingModeNullIP:
pb.SafeBrowsingBlockingModeNullIp = &fcpb.BlockingModeNullIP{}
case *dnsmsg.BlockingModeREFUSED:
pb.SafeBrowsingBlockingModeRefused = &fcpb.BlockingModeREFUSED{}
default:
panic(fmt.Errorf("bad safe browsing blocking mode %T(%[1]v)", m))
}
}
// ratelimiterToProtobuf converts the rate-limit settings to protobuf.  If c
// is nil, r is nil.
func ratelimiterToProtobuf(c *agd.RatelimitConfig) (r *fcpb.Ratelimiter) {
if c == nil {
return nil
}
return fcpb.Ratelimiter_builder{
ClientCidr: prefixesToProtobuf(c.ClientSubnets),
Rps: c.RPS,
Enabled: c.Enabled,
}.Build()
}


@@ -0,0 +1,164 @@
package filecacheopb
import (
"net/netip"
"testing"
"github.com/AdguardTeam/AdGuardDNS/internal/dnsmsg"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/profiledbtest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/proto"
)
func TestBlockingModesToInternal(t *testing.T) {
testCases := []struct {
wantMode dnsmsg.BlockingMode
profile *fcpb.Profile
name string
}{{
name: "custom_ips",
wantMode: &dnsmsg.BlockingModeCustomIP{
IPv4: []netip.Addr{profiledbtest.IPv4},
IPv6: []netip.Addr{profiledbtest.IPv6},
},
profile: fcpb.Profile_builder{
BlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
AdultBlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
SafeBrowsingBlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
}.Build(),
}, {
name: "nxdomain",
wantMode: &dnsmsg.BlockingModeNXDOMAIN{},
profile: fcpb.Profile_builder{
BlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
AdultBlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
SafeBrowsingBlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
}.Build(),
}, {
name: "null_ip",
wantMode: &dnsmsg.BlockingModeNullIP{},
profile: fcpb.Profile_builder{
BlockingModeNullIp: &fcpb.BlockingModeNullIP{},
AdultBlockingModeNullIp: &fcpb.BlockingModeNullIP{},
SafeBrowsingBlockingModeNullIp: &fcpb.BlockingModeNullIP{},
}.Build(),
}, {
name: "refused",
wantMode: &dnsmsg.BlockingModeREFUSED{},
profile: fcpb.Profile_builder{
BlockingModeRefused: &fcpb.BlockingModeREFUSED{},
AdultBlockingModeRefused: &fcpb.BlockingModeREFUSED{},
SafeBrowsingBlockingModeRefused: &fcpb.BlockingModeREFUSED{},
}.Build(),
}, {
name: "null_blocking_mode",
wantMode: nil,
profile: fcpb.Profile_builder{}.Build(),
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
bmAdult, err := adultBlockingModeToInternal(tc.profile)
require.NoError(t, err)
assert.Equal(t, tc.wantMode, bmAdult)
bm, err := blockingModeToInternal(tc.profile)
require.NoError(t, err)
assert.Equal(t, tc.wantMode, bm)
bmSafeBrowsing, err := safeBrowsingBlockingModeToInternal(tc.profile)
require.NoError(t, err)
assert.Equal(t, tc.wantMode, bmSafeBrowsing)
})
}
}
func TestSetBlockingModes(t *testing.T) {
testCases := []struct {
mode dnsmsg.BlockingMode
wantProfile *fcpb.Profile
name string
wantPanicMsg string
}{{
name: "custom_ips",
mode: &dnsmsg.BlockingModeCustomIP{
IPv4: []netip.Addr{profiledbtest.IPv4},
IPv6: []netip.Addr{profiledbtest.IPv6},
},
wantProfile: fcpb.Profile_builder{
BlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
AdultBlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
SafeBrowsingBlockingModeCustomIp: fcpb.BlockingModeCustomIP_builder{
Ipv4: [][]byte{profiledbtest.IPv4Bytes},
Ipv6: [][]byte{profiledbtest.IPv6Bytes},
}.Build(),
}.Build(),
}, {
name: "nxdomain",
mode: &dnsmsg.BlockingModeNXDOMAIN{},
wantProfile: fcpb.Profile_builder{
BlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
AdultBlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
SafeBrowsingBlockingModeNxdomain: &fcpb.BlockingModeNXDOMAIN{},
}.Build(),
}, {
name: "null_ip",
mode: &dnsmsg.BlockingModeNullIP{},
wantProfile: fcpb.Profile_builder{
BlockingModeNullIp: &fcpb.BlockingModeNullIP{},
AdultBlockingModeNullIp: &fcpb.BlockingModeNullIP{},
SafeBrowsingBlockingModeNullIp: &fcpb.BlockingModeNullIP{},
}.Build(),
}, {
name: "refused",
mode: &dnsmsg.BlockingModeREFUSED{},
wantProfile: fcpb.Profile_builder{
BlockingModeRefused: &fcpb.BlockingModeREFUSED{},
AdultBlockingModeRefused: &fcpb.BlockingModeREFUSED{},
SafeBrowsingBlockingModeRefused: &fcpb.BlockingModeREFUSED{},
}.Build(),
}, {
name: "null_blocking_mode",
mode: nil,
wantProfile: fcpb.Profile_builder{}.Build(),
wantPanicMsg: "bad blocking mode <nil>(<nil>)",
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
f := func() {
builder := &fcpb.Profile_builder{}
setBlockingMode(builder, tc.mode)
setAdultBlockingMode(builder, tc.mode)
setSafeBrowsingBlockingMode(builder, tc.mode)
got := builder.Build()
assert.True(t, proto.Equal(tc.wantProfile, got))
}
if tc.wantPanicMsg == "" {
assert.NotPanics(t, f)
} else {
assert.PanicsWithError(t, tc.wantPanicMsg, f)
}
})
}
}

View File

@@ -0,0 +1,129 @@
package filecacheopb
import (
"context"
"fmt"
"log/slog"
"os"
"github.com/AdguardTeam/AdGuardDNS/internal/access"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb"
"github.com/AdguardTeam/golibs/errors"
"github.com/c2h5oh/datasize"
renameio "github.com/google/renameio/v2"
"google.golang.org/protobuf/proto"
)
// Storage is the file-cache storage that encodes data using protobuf.
type Storage struct {
logger *slog.Logger
baseCustomLogger *slog.Logger
profAccessCons *access.ProfileConstructor
path string
respSzEst datasize.ByteSize
}
// Config is the configuration structure for the protobuf-encoded file-cache
// storage.
type Config struct {
// Logger is used for logging the operation of the profile database. It
// must not be nil.
Logger *slog.Logger
// BaseCustomLogger is the base logger used for the custom filters. It must
// not be nil.
BaseCustomLogger *slog.Logger
// ProfileAccessConstructor is used to create access managers for profiles.
// It must not be nil.
ProfileAccessConstructor *access.ProfileConstructor
// CacheFilePath is the path to the profile cache file. It must be set.
CacheFilePath string
// ResponseSizeEstimate is the estimate of the size of one DNS response for
// the purposes of custom ratelimiting. Responses over this estimate are
// counted as several responses. It must be positive.
ResponseSizeEstimate datasize.ByteSize
}
// New returns a new protobuf-encoded file-cache storage. c must not be nil and
// must be valid.
func New(c *Config) (s *Storage) {
return &Storage{
logger: c.Logger,
baseCustomLogger: c.BaseCustomLogger,
profAccessCons: c.ProfileAccessConstructor,
path: c.CacheFilePath,
respSzEst: c.ResponseSizeEstimate,
}
}
var _ internal.FileCacheStorage = (*Storage)(nil)
// Load implements the [internal.FileCacheStorage] interface for *Storage.
func (s *Storage) Load(ctx context.Context) (c *internal.FileCache, err error) {
s.logger.InfoContext(ctx, "loading")
b, err := os.ReadFile(s.path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
s.logger.WarnContext(ctx, "file not found")
return nil, nil
}
return nil, err
}
fc := &fcpb.FileCache{}
err = proto.Unmarshal(b, fc)
if err != nil {
return nil, fmt.Errorf("decoding protobuf: %w", err)
}
if fc.GetVersion() != internal.FileCacheVersion {
// Do not decode the protobuf file contents, since it probably has an
// unexpected structure.
return nil, fmt.Errorf(
"%w: version %d is different from %d",
internal.CacheVersionError,
fc.GetVersion(),
internal.FileCacheVersion,
)
}
return fileCacheToInternal(fc, s.baseCustomLogger, s.profAccessCons, s.respSzEst)
}
// Store implements the [internal.FileCacheStorage] interface for *Storage.
func (s *Storage) Store(
ctx context.Context,
c *internal.FileCache,
) (n datasize.ByteSize, err error) {
profNum := len(c.Profiles)
devNum := len(c.Devices)
s.logger.InfoContext(ctx, "saving profiles", "path", s.path, "num", profNum, "devs", devNum)
defer s.logger.InfoContext(
ctx,
"saved profiles",
"path", s.path,
"num", profNum,
"devs", devNum,
)
fc := fileCacheToProtobuf(c)
b, err := proto.Marshal(fc)
if err != nil {
return 0, fmt.Errorf("encoding protobuf: %w", err)
}
err = renameio.WriteFile(s.path, b, 0o600)
if err != nil {
return 0, fmt.Errorf("writing file: %w", err)
}
return datasize.ByteSize(len(b)), nil
}

View File

@@ -0,0 +1,140 @@
package filecacheopb_test
import (
"cmp"
"fmt"
"path/filepath"
"testing"
"time"
"github.com/AdguardTeam/AdGuardDNS/internal/agd"
"github.com/AdguardTeam/AdGuardDNS/internal/agdtest"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/filecacheopb"
"github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/profiledbtest"
"github.com/AdguardTeam/golibs/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// newTestStorage returns a new Storage and fills its config with the given
// values. If conf is nil, the default config is used.
func newTestStorage(tb testing.TB, conf *filecacheopb.Config) (storage *filecacheopb.Storage) {
tb.Helper()
conf = cmp.Or(conf, &filecacheopb.Config{})
storage = filecacheopb.New(&filecacheopb.Config{
Logger: cmp.Or(conf.Logger, profiledbtest.Logger),
BaseCustomLogger: cmp.Or(conf.BaseCustomLogger, profiledbtest.Logger),
ProfileAccessConstructor: cmp.Or(
conf.ProfileAccessConstructor,
profiledbtest.ProfileAccessConstructor,
),
CacheFilePath: cmp.Or(
conf.CacheFilePath,
filepath.Join(tb.TempDir(), "profiles.pb"),
),
ResponseSizeEstimate: cmp.Or(conf.ResponseSizeEstimate, profiledbtest.RespSzEst),
})
require.NotNil(tb, storage)
return storage
}
func TestStorage(t *testing.T) {
prof, dev := profiledbtest.NewProfile(t)
s := newTestStorage(t, nil)
fc := &internal.FileCache{
SyncTime: time.Now().Round(0).UTC(),
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev},
Version: internal.FileCacheVersion,
}
ctx := profiledbtest.ContextWithTimeout(t)
n, err := s.Store(ctx, fc)
require.NoError(t, err)
assert.Positive(t, n)
gotFC, err := s.Load(ctx)
require.NoError(t, err)
require.NotNil(t, gotFC)
require.NotEmpty(t, *gotFC)
agdtest.AssertEqualProfile(t, fc, gotFC)
}
func TestStorage_Load_noFile(t *testing.T) {
s := newTestStorage(t, nil)
ctx := profiledbtest.ContextWithTimeout(t)
fc, err := s.Load(ctx)
assert.NoError(t, err)
assert.Nil(t, fc)
}
func TestStorage_Load_BadVersion(t *testing.T) {
s := newTestStorage(t, nil)
ctx := profiledbtest.ContextWithTimeout(t)
fc := &internal.FileCache{
Version: 1,
}
n, err := s.Store(ctx, fc)
require.NoError(t, err)
assert.Positive(t, n)
fc, err = s.Load(ctx)
assert.Nil(t, fc)
testutil.AssertErrorMsg(t,
fmt.Sprintf(
"%v: version 1 is different from %d",
internal.CacheVersionError,
internal.FileCacheVersion,
),
err,
)
}
func BenchmarkStorage(b *testing.B) {
prof, dev := profiledbtest.NewProfile(b)
s := newTestStorage(b, nil)
fc := &internal.FileCache{
SyncTime: time.Now().Round(0).UTC(),
Profiles: []*agd.Profile{prof},
Devices: []*agd.Device{dev},
Version: internal.FileCacheVersion,
}
b.Run("store", func(b *testing.B) {
ctx := profiledbtest.ContextWithTimeout(b)
b.ReportAllocs()
for b.Loop() {
_, err := s.Store(ctx, fc)
require.NoError(b, err)
}
})
b.Run("load", func(b *testing.B) {
ctx := profiledbtest.ContextWithTimeout(b)
b.ReportAllocs()
for b.Loop() {
_, err := s.Load(ctx)
require.NoError(b, err)
}
})
// Most recent results:
// goos: darwin
// goarch: arm64
// pkg: github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/filecacheopb
// cpu: Apple M3
// BenchmarkStorage/load-8 34159 35072 ns/op 14280 B/op 164 allocs/op
// BenchmarkStorage/store-8 214 5437664 ns/op 6883 B/op 107 allocs/op
}

View File

@@ -1,7 +1,11 @@
// NOTE: Keep in sync with the fcpb/fc.proto
//
// TODO(f.setrakov): Remove when the migration to the opaque API is finished.
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc v6.32.0
// protoc v6.33.1
// source: filecache.proto
package filecachepb
@@ -632,13 +636,14 @@ func (*CustomDomainConfig_StateCurrent_) isCustomDomainConfig_State() {}
func (*CustomDomainConfig_StatePending_) isCustomDomainConfig_State() {}
type FilterConfig struct {
state protoimpl.MessageState `protogen:"open.v1"`
Custom *FilterConfig_Custom `protobuf:"bytes,1,opt,name=custom,proto3" json:"custom,omitempty"`
Parental *FilterConfig_Parental `protobuf:"bytes,2,opt,name=parental,proto3" json:"parental,omitempty"`
RuleList *FilterConfig_RuleList `protobuf:"bytes,3,opt,name=rule_list,json=ruleList,proto3" json:"rule_list,omitempty"`
SafeBrowsing *FilterConfig_SafeBrowsing `protobuf:"bytes,4,opt,name=safe_browsing,json=safeBrowsing,proto3" json:"safe_browsing,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Custom *FilterConfig_Custom `protobuf:"bytes,1,opt,name=custom,proto3" json:"custom,omitempty"`
Parental *FilterConfig_Parental `protobuf:"bytes,2,opt,name=parental,proto3" json:"parental,omitempty"`
RuleList *FilterConfig_RuleList `protobuf:"bytes,3,opt,name=rule_list,json=ruleList,proto3" json:"rule_list,omitempty"`
SafeBrowsing *FilterConfig_SafeBrowsing `protobuf:"bytes,4,opt,name=safe_browsing,json=safeBrowsing,proto3" json:"safe_browsing,omitempty"`
CategoryFilter *FilterConfig_CategoryFilter `protobuf:"bytes,5,opt,name=category_filter,json=categoryFilter,proto3" json:"category_filter,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *FilterConfig) Reset() {
@@ -699,6 +704,13 @@ func (x *FilterConfig) GetSafeBrowsing() *FilterConfig_SafeBrowsing {
return nil
}
func (x *FilterConfig) GetCategoryFilter() *FilterConfig_CategoryFilter {
if x != nil {
return x.CategoryFilter
}
return nil
}
type DayInterval struct {
state protoimpl.MessageState `protogen:"open.v1"`
Start uint32 `protobuf:"varint,1,opt,name=start,proto3" json:"start,omitempty"`
@@ -1785,25 +1797,77 @@ func (x *FilterConfig_SafeBrowsing) GetNewlyRegisteredDomainsEnabled() bool {
return false
}
type FilterConfig_CategoryFilter struct {
state protoimpl.MessageState `protogen:"open.v1"`
Ids []string `protobuf:"bytes,1,rep,name=ids,proto3" json:"ids,omitempty"`
Enabled bool `protobuf:"varint,2,opt,name=enabled,proto3" json:"enabled,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *FilterConfig_CategoryFilter) Reset() {
*x = FilterConfig_CategoryFilter{}
mi := &file_filecache_proto_msgTypes[23]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *FilterConfig_CategoryFilter) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*FilterConfig_CategoryFilter) ProtoMessage() {}
func (x *FilterConfig_CategoryFilter) ProtoReflect() protoreflect.Message {
mi := &file_filecache_proto_msgTypes[23]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use FilterConfig_CategoryFilter.ProtoReflect.Descriptor instead.
func (*FilterConfig_CategoryFilter) Descriptor() ([]byte, []int) {
return file_filecache_proto_rawDescGZIP(), []int{4, 6}
}
func (x *FilterConfig_CategoryFilter) GetIds() []string {
if x != nil {
return x.Ids
}
return nil
}
func (x *FilterConfig_CategoryFilter) GetEnabled() bool {
if x != nil {
return x.Enabled
}
return false
}
var File_filecache_proto protoreflect.FileDescriptor
const file_filecache_proto_rawDesc = "" +
"\n" +
"\x0ffilecache.proto\x12\tprofiledb\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xbb\x01\n" +
"\x0ffilecache.proto\x12\vfilecachepb\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\xbf\x01\n" +
"\tFileCache\x127\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\x12.\n" +
"\bprofiles\x18\x02 \x03(\v2\x12.profiledb.ProfileR\bprofiles\x12+\n" +
"\adevices\x18\x03 \x03(\v2\x11.profiledb.DeviceR\adevices\x12\x18\n" +
"\aversion\x18\x04 \x01(\x05R\aversion\"\xf3\x0f\n" +
"\aProfile\x12F\n" +
"\x0ecustom_domains\x18\x14 \x01(\v2\x1f.profiledb.AccountCustomDomainsR\rcustomDomains\x12<\n" +
"\rfilter_config\x18\x01 \x01(\v2\x17.profiledb.FilterConfigR\ffilterConfig\x12)\n" +
"\x06access\x18\x02 \x01(\v2\x11.profiledb.AccessR\x06access\x12X\n" +
"\x17blocking_mode_custom_ip\x18\x03 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x00R\x14blockingModeCustomIp\x12W\n" +
"\x16blocking_mode_nxdomain\x18\x04 \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x00R\x14blockingModeNxdomain\x12R\n" +
"\x15blocking_mode_null_ip\x18\x05 \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x00R\x12blockingModeNullIp\x12T\n" +
"\x15blocking_mode_refused\x18\x06 \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x00R\x13blockingModeRefused\x128\n" +
"\vratelimiter\x18\a \x01(\v2\x16.profiledb.RatelimiterR\vratelimiter\x12\x1d\n" +
"\tsync_time\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\bsyncTime\x120\n" +
"\bprofiles\x18\x02 \x03(\v2\x14.filecachepb.ProfileR\bprofiles\x12-\n" +
"\adevices\x18\x03 \x03(\v2\x13.filecachepb.DeviceR\adevices\x12\x18\n" +
"\aversion\x18\x04 \x01(\x05R\aversion\"\x93\x10\n" +
"\aProfile\x12H\n" +
"\x0ecustom_domains\x18\x14 \x01(\v2!.filecachepb.AccountCustomDomainsR\rcustomDomains\x12>\n" +
"\rfilter_config\x18\x01 \x01(\v2\x19.filecachepb.FilterConfigR\ffilterConfig\x12+\n" +
"\x06access\x18\x02 \x01(\v2\x13.filecachepb.AccessR\x06access\x12Z\n" +
"\x17blocking_mode_custom_ip\x18\x03 \x01(\v2!.filecachepb.BlockingModeCustomIPH\x00R\x14blockingModeCustomIp\x12Y\n" +
"\x16blocking_mode_nxdomain\x18\x04 \x01(\v2!.filecachepb.BlockingModeNXDOMAINH\x00R\x14blockingModeNxdomain\x12T\n" +
"\x15blocking_mode_null_ip\x18\x05 \x01(\v2\x1f.filecachepb.BlockingModeNullIPH\x00R\x12blockingModeNullIp\x12V\n" +
"\x15blocking_mode_refused\x18\x06 \x01(\v2 .filecachepb.BlockingModeREFUSEDH\x00R\x13blockingModeRefused\x12:\n" +
"\vratelimiter\x18\a \x01(\v2\x18.filecachepb.RatelimiterR\vratelimiter\x12\x1d\n" +
"\n" +
"account_id\x18\x13 \x01(\tR\taccountId\x12\x1d\n" +
"\n" +
@@ -1819,24 +1883,24 @@ const file_filecache_proto_rawDesc = "" +
"\adeleted\x18\x0f \x01(\bR\adeleted\x12+\n" +
"\x11filtering_enabled\x18\x10 \x01(\bR\x10filteringEnabled\x12$\n" +
"\x0eip_log_enabled\x18\x11 \x01(\bR\fipLogEnabled\x12*\n" +
"\x11query_log_enabled\x18\x12 \x01(\bR\x0fqueryLogEnabled\x12c\n" +
"\x1dadult_blocking_mode_custom_ip\x18\x15 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x01R\x19adultBlockingModeCustomIp\x12b\n" +
"\x1cadult_blocking_mode_nxdomain\x18\x16 \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x01R\x19adultBlockingModeNxdomain\x12]\n" +
"\x1badult_blocking_mode_null_ip\x18\x17 \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x01R\x17adultBlockingModeNullIp\x12_\n" +
"\x1badult_blocking_mode_refused\x18\x18 \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x01R\x18adultBlockingModeRefused\x12r\n" +
"%safe_browsing_blocking_mode_custom_ip\x18\x19 \x01(\v2\x1f.profiledb.BlockingModeCustomIPH\x02R safeBrowsingBlockingModeCustomIp\x12q\n" +
"$safe_browsing_blocking_mode_nxdomain\x18\x1a \x01(\v2\x1f.profiledb.BlockingModeNXDOMAINH\x02R safeBrowsingBlockingModeNxdomain\x12l\n" +
"#safe_browsing_blocking_mode_null_ip\x18\x1b \x01(\v2\x1d.profiledb.BlockingModeNullIPH\x02R\x1esafeBrowsingBlockingModeNullIp\x12n\n" +
"#safe_browsing_blocking_mode_refused\x18\x1c \x01(\v2\x1e.profiledb.BlockingModeREFUSEDH\x02R\x1fsafeBrowsingBlockingModeRefusedB\x0f\n" +
"\x11query_log_enabled\x18\x12 \x01(\bR\x0fqueryLogEnabled\x12e\n" +
"\x1dadult_blocking_mode_custom_ip\x18\x15 \x01(\v2!.filecachepb.BlockingModeCustomIPH\x01R\x19adultBlockingModeCustomIp\x12d\n" +
"\x1cadult_blocking_mode_nxdomain\x18\x16 \x01(\v2!.filecachepb.BlockingModeNXDOMAINH\x01R\x19adultBlockingModeNxdomain\x12_\n" +
"\x1badult_blocking_mode_null_ip\x18\x17 \x01(\v2\x1f.filecachepb.BlockingModeNullIPH\x01R\x17adultBlockingModeNullIp\x12a\n" +
"\x1badult_blocking_mode_refused\x18\x18 \x01(\v2 .filecachepb.BlockingModeREFUSEDH\x01R\x18adultBlockingModeRefused\x12t\n" +
"%safe_browsing_blocking_mode_custom_ip\x18\x19 \x01(\v2!.filecachepb.BlockingModeCustomIPH\x02R safeBrowsingBlockingModeCustomIp\x12s\n" +
"$safe_browsing_blocking_mode_nxdomain\x18\x1a \x01(\v2!.filecachepb.BlockingModeNXDOMAINH\x02R safeBrowsingBlockingModeNxdomain\x12n\n" +
"#safe_browsing_blocking_mode_null_ip\x18\x1b \x01(\v2\x1f.filecachepb.BlockingModeNullIPH\x02R\x1esafeBrowsingBlockingModeNullIp\x12p\n" +
"#safe_browsing_blocking_mode_refused\x18\x1c \x01(\v2 .filecachepb.BlockingModeREFUSEDH\x02R\x1fsafeBrowsingBlockingModeRefusedB\x0f\n" +
"\rblocking_modeB\x15\n" +
"\x13adult_blocking_modeB\x1d\n" +
"\x1bsafe_browsing_blocking_mode\"i\n" +
"\x14AccountCustomDomains\x127\n" +
"\adomains\x18\x01 \x03(\v2\x1d.profiledb.CustomDomainConfigR\adomains\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"\x85\x04\n" +
"\x12CustomDomainConfig\x12Q\n" +
"\rstate_current\x18\x01 \x01(\v2*.profiledb.CustomDomainConfig.StateCurrentH\x00R\fstateCurrent\x12Q\n" +
"\rstate_pending\x18\x02 \x01(\v2*.profiledb.CustomDomainConfig.StatePendingH\x00R\fstatePending\x12\x18\n" +
"\x1bsafe_browsing_blocking_mode\"k\n" +
"\x14AccountCustomDomains\x129\n" +
"\adomains\x18\x01 \x03(\v2\x1f.filecachepb.CustomDomainConfigR\adomains\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"\x89\x04\n" +
"\x12CustomDomainConfig\x12S\n" +
"\rstate_current\x18\x01 \x01(\v2,.filecachepb.CustomDomainConfig.StateCurrentH\x00R\fstateCurrent\x12S\n" +
"\rstate_pending\x18\x02 \x01(\v2,.filecachepb.CustomDomainConfig.StatePendingH\x00R\fstatePending\x12\x18\n" +
"\adomains\x18\x03 \x03(\tR\adomains\x1a\xb9\x01\n" +
"\fStateCurrent\x129\n" +
"\n" +
@@ -1847,41 +1911,44 @@ const file_filecache_proto_rawDesc = "" +
"\fStatePending\x122\n" +
"\x06expire\x18\x01 \x01(\v2\x1a.google.protobuf.TimestampR\x06expire\x12&\n" +
"\x0fwell_known_path\x18\x02 \x01(\tR\rwellKnownPathB\a\n" +
"\x05state\"\xa9\n" +
"\n" +
"\fFilterConfig\x126\n" +
"\x06custom\x18\x01 \x01(\v2\x1e.profiledb.FilterConfig.CustomR\x06custom\x12<\n" +
"\bparental\x18\x02 \x01(\v2 .profiledb.FilterConfig.ParentalR\bparental\x12=\n" +
"\trule_list\x18\x03 \x01(\v2 .profiledb.FilterConfig.RuleListR\bruleList\x12I\n" +
"\rsafe_browsing\x18\x04 \x01(\v2$.profiledb.FilterConfig.SafeBrowsingR\fsafeBrowsing\x1aD\n" +
"\x05state\"\xd4\v\n" +
"\fFilterConfig\x128\n" +
"\x06custom\x18\x01 \x01(\v2 .filecachepb.FilterConfig.CustomR\x06custom\x12>\n" +
"\bparental\x18\x02 \x01(\v2\".filecachepb.FilterConfig.ParentalR\bparental\x12?\n" +
"\trule_list\x18\x03 \x01(\v2\".filecachepb.FilterConfig.RuleListR\bruleList\x12K\n" +
"\rsafe_browsing\x18\x04 \x01(\v2&.filecachepb.FilterConfig.SafeBrowsingR\fsafeBrowsing\x12Q\n" +
"\x0fcategory_filter\x18\x05 \x01(\v2(.filecachepb.FilterConfig.CategoryFilterR\x0ecategoryFilter\x1aD\n" +
"\x06Custom\x12\x14\n" +
"\x05rules\x18\x03 \x03(\tR\x05rules\x12\x18\n" +
"\aenabled\x18\x04 \x01(\bR\aenabledJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03\x1a\xcc\x02\n" +
"\bParental\x12G\n" +
"\x0epause_schedule\x18\x01 \x01(\v2 .profiledb.FilterConfig.ScheduleR\rpauseSchedule\x12)\n" +
"\aenabled\x18\x04 \x01(\bR\aenabledJ\x04\b\x01\x10\x02J\x04\b\x02\x10\x03\x1a\xce\x02\n" +
"\bParental\x12I\n" +
"\x0epause_schedule\x18\x01 \x01(\v2\".filecachepb.FilterConfig.ScheduleR\rpauseSchedule\x12)\n" +
"\x10blocked_services\x18\x02 \x03(\tR\x0fblockedServices\x12\x18\n" +
"\aenabled\x18\x03 \x01(\bR\aenabled\x124\n" +
"\x16adult_blocking_enabled\x18\x04 \x01(\bR\x14adultBlockingEnabled\x12=\n" +
"\x1bsafe_search_general_enabled\x18\x05 \x01(\bR\x18safeSearchGeneralEnabled\x12=\n" +
"\x1bsafe_search_youtube_enabled\x18\x06 \x01(\bR\x18safeSearchYoutubeEnabled\x1ac\n" +
"\bSchedule\x12:\n" +
"\x04week\x18\x01 \x01(\v2&.profiledb.FilterConfig.WeeklyScheduleR\x04week\x12\x1b\n" +
"\ttime_zone\x18\x02 \x01(\tR\btimeZone\x1a\xb6\x02\n" +
"\x0eWeeklySchedule\x12(\n" +
"\x03mon\x18\x01 \x01(\v2\x16.profiledb.DayIntervalR\x03mon\x12(\n" +
"\x03tue\x18\x02 \x01(\v2\x16.profiledb.DayIntervalR\x03tue\x12(\n" +
"\x03wed\x18\x03 \x01(\v2\x16.profiledb.DayIntervalR\x03wed\x12(\n" +
"\x03thu\x18\x04 \x01(\v2\x16.profiledb.DayIntervalR\x03thu\x12(\n" +
"\x03fri\x18\x05 \x01(\v2\x16.profiledb.DayIntervalR\x03fri\x12(\n" +
"\x03sat\x18\x06 \x01(\v2\x16.profiledb.DayIntervalR\x03sat\x12(\n" +
"\x03sun\x18\a \x01(\v2\x16.profiledb.DayIntervalR\x03sun\x1a6\n" +
"\x1bsafe_search_youtube_enabled\x18\x06 \x01(\bR\x18safeSearchYoutubeEnabled\x1ae\n" +
"\bSchedule\x12<\n" +
"\x04week\x18\x01 \x01(\v2(.filecachepb.FilterConfig.WeeklyScheduleR\x04week\x12\x1b\n" +
"\ttime_zone\x18\x02 \x01(\tR\btimeZone\x1a\xc4\x02\n" +
"\x0eWeeklySchedule\x12*\n" +
"\x03mon\x18\x01 \x01(\v2\x18.filecachepb.DayIntervalR\x03mon\x12*\n" +
"\x03tue\x18\x02 \x01(\v2\x18.filecachepb.DayIntervalR\x03tue\x12*\n" +
"\x03wed\x18\x03 \x01(\v2\x18.filecachepb.DayIntervalR\x03wed\x12*\n" +
"\x03thu\x18\x04 \x01(\v2\x18.filecachepb.DayIntervalR\x03thu\x12*\n" +
"\x03fri\x18\x05 \x01(\v2\x18.filecachepb.DayIntervalR\x03fri\x12*\n" +
"\x03sat\x18\x06 \x01(\v2\x18.filecachepb.DayIntervalR\x03sat\x12*\n" +
"\x03sun\x18\a \x01(\v2\x18.filecachepb.DayIntervalR\x03sun\x1a6\n" +
"\bRuleList\x12\x10\n" +
"\x03ids\x18\x01 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\x1a\xad\x01\n" +
"\fSafeBrowsing\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\x12:\n" +
"\x19dangerous_domains_enabled\x18\x02 \x01(\bR\x17dangerousDomainsEnabled\x12G\n" +
" newly_registered_domains_enabled\x18\x03 \x01(\bR\x1dnewlyRegisteredDomainsEnabled\"5\n" +
" newly_registered_domains_enabled\x18\x03 \x01(\bR\x1dnewlyRegisteredDomainsEnabled\x1a<\n" +
"\x0eCategoryFilter\x12\x10\n" +
"\x03ids\x18\x01 \x03(\tR\x03ids\x12\x18\n" +
"\aenabled\x18\x02 \x01(\bR\aenabled\"5\n" +
"\vDayInterval\x12\x14\n" +
"\x05start\x18\x01 \x01(\rR\x05start\x12\x10\n" +
"\x03end\x18\x02 \x01(\rR\x03end\">\n" +
@@ -1890,21 +1957,21 @@ const file_filecache_proto_rawDesc = "" +
"\x04ipv6\x18\x02 \x03(\fR\x04ipv6\"\x16\n" +
"\x14BlockingModeNXDOMAIN\"\x14\n" +
"\x12BlockingModeNullIP\"\x15\n" +
"\x13BlockingModeREFUSED\"\xa6\x02\n" +
"\x06Device\x12I\n" +
"\x0eauthentication\x18\x06 \x01(\v2!.profiledb.AuthenticationSettingsR\x0eauthentication\x12\x1b\n" +
"\x13BlockingModeREFUSED\"\xa8\x02\n" +
"\x06Device\x12K\n" +
"\x0eauthentication\x18\x06 \x01(\v2#.filecachepb.AuthenticationSettingsR\x0eauthentication\x12\x1b\n" +
"\tdevice_id\x18\x01 \x01(\tR\bdeviceId\x12\x1f\n" +
"\vdevice_name\x18\x03 \x01(\tR\n" +
"deviceName\x12$\n" +
"\x0ehuman_id_lower\x18\a \x01(\tR\fhumanIdLower\x12\x1b\n" +
"\tlinked_ip\x18\x02 \x01(\fR\blinkedIp\x12#\n" +
"\rdedicated_ips\x18\x04 \x03(\fR\fdedicatedIps\x12+\n" +
"\x11filtering_enabled\x18\x05 \x01(\bR\x10filteringEnabled\"\xad\x02\n" +
"\x11filtering_enabled\x18\x05 \x01(\bR\x10filteringEnabled\"\xb1\x02\n" +
"\x06Access\x12#\n" +
"\rallowlist_asn\x18\x04 \x03(\rR\fallowlistAsn\x12;\n" +
"\x0eallowlist_cidr\x18\x01 \x03(\v2\x14.profiledb.CidrRangeR\rallowlistCidr\x12#\n" +
"\rblocklist_asn\x18\x05 \x03(\rR\fblocklistAsn\x12;\n" +
"\x0eblocklist_cidr\x18\x02 \x03(\v2\x14.profiledb.CidrRangeR\rblocklistCidr\x124\n" +
"\rallowlist_asn\x18\x04 \x03(\rR\fallowlistAsn\x12=\n" +
"\x0eallowlist_cidr\x18\x01 \x03(\v2\x16.filecachepb.CidrRangeR\rallowlistCidr\x12#\n" +
"\rblocklist_asn\x18\x05 \x03(\rR\fblocklistAsn\x12=\n" +
"\x0eblocklist_cidr\x18\x02 \x03(\v2\x16.filecachepb.CidrRangeR\rblocklistCidr\x124\n" +
"\x16blocklist_domain_rules\x18\x03 \x03(\tR\x14blocklistDomainRules\x12)\n" +
"\x10standard_enabled\x18\x06 \x01(\bR\x0fstandardEnabled\"=\n" +
"\tCidrRange\x12\x18\n" +
@@ -1913,12 +1980,12 @@ const file_filecache_proto_rawDesc = "" +
"\x16AuthenticationSettings\x12\"\n" +
"\rdoh_auth_only\x18\x01 \x01(\bR\vdohAuthOnly\x122\n" +
"\x14password_hash_bcrypt\x18\x02 \x01(\fH\x00R\x12passwordHashBcryptB\x13\n" +
"\x11doh_password_hash\"p\n" +
"\vRatelimiter\x125\n" +
"\vclient_cidr\x18\x01 \x03(\v2\x14.profiledb.CidrRangeR\n" +
"\x11doh_password_hash\"r\n" +
"\vRatelimiter\x127\n" +
"\vclient_cidr\x18\x01 \x03(\v2\x16.filecachepb.CidrRangeR\n" +
"clientCidr\x12\x10\n" +
"\x03rps\x18\x02 \x01(\rR\x03rps\x12\x18\n" +
"\aenabled\x18\x03 \x01(\bR\aenabledB\x0fZ\r./filecachepbb\x06proto3"
"\aenabled\x18\x03 \x01(\bR\aenabledb\x06proto3"
var (
file_filecache_proto_rawDescOnce sync.Once
@@ -1932,83 +1999,85 @@ func file_filecache_proto_rawDescGZIP() []byte {
return file_filecache_proto_rawDescData
}
var file_filecache_proto_msgTypes = make([]protoimpl.MessageInfo, 23)
var file_filecache_proto_msgTypes = make([]protoimpl.MessageInfo, 24)
var file_filecache_proto_goTypes = []any{
(*FileCache)(nil), // 0: profiledb.FileCache
(*Profile)(nil), // 1: profiledb.Profile
(*AccountCustomDomains)(nil), // 2: profiledb.AccountCustomDomains
(*CustomDomainConfig)(nil), // 3: profiledb.CustomDomainConfig
(*FilterConfig)(nil), // 4: profiledb.FilterConfig
(*DayInterval)(nil), // 5: profiledb.DayInterval
(*BlockingModeCustomIP)(nil), // 6: profiledb.BlockingModeCustomIP
(*BlockingModeNXDOMAIN)(nil), // 7: profiledb.BlockingModeNXDOMAIN
(*BlockingModeNullIP)(nil), // 8: profiledb.BlockingModeNullIP
(*BlockingModeREFUSED)(nil), // 9: profiledb.BlockingModeREFUSED
(*Device)(nil), // 10: profiledb.Device
(*Access)(nil), // 11: profiledb.Access
(*CidrRange)(nil), // 12: profiledb.CidrRange
(*AuthenticationSettings)(nil), // 13: profiledb.AuthenticationSettings
(*Ratelimiter)(nil), // 14: profiledb.Ratelimiter
(*CustomDomainConfig_StateCurrent)(nil), // 15: profiledb.CustomDomainConfig.StateCurrent
(*CustomDomainConfig_StatePending)(nil), // 16: profiledb.CustomDomainConfig.StatePending
(*FilterConfig_Custom)(nil), // 17: profiledb.FilterConfig.Custom
(*FilterConfig_Parental)(nil), // 18: profiledb.FilterConfig.Parental
(*FilterConfig_Schedule)(nil), // 19: profiledb.FilterConfig.Schedule
(*FilterConfig_WeeklySchedule)(nil), // 20: profiledb.FilterConfig.WeeklySchedule
(*FilterConfig_RuleList)(nil), // 21: profiledb.FilterConfig.RuleList
(*FilterConfig_SafeBrowsing)(nil), // 22: profiledb.FilterConfig.SafeBrowsing
(*timestamppb.Timestamp)(nil), // 23: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 24: google.protobuf.Duration
(*FileCache)(nil), // 0: filecachepb.FileCache
(*Profile)(nil), // 1: filecachepb.Profile
(*AccountCustomDomains)(nil), // 2: filecachepb.AccountCustomDomains
(*CustomDomainConfig)(nil), // 3: filecachepb.CustomDomainConfig
(*FilterConfig)(nil), // 4: filecachepb.FilterConfig
(*DayInterval)(nil), // 5: filecachepb.DayInterval
(*BlockingModeCustomIP)(nil), // 6: filecachepb.BlockingModeCustomIP
(*BlockingModeNXDOMAIN)(nil), // 7: filecachepb.BlockingModeNXDOMAIN
(*BlockingModeNullIP)(nil), // 8: filecachepb.BlockingModeNullIP
(*BlockingModeREFUSED)(nil), // 9: filecachepb.BlockingModeREFUSED
(*Device)(nil), // 10: filecachepb.Device
(*Access)(nil), // 11: filecachepb.Access
(*CidrRange)(nil), // 12: filecachepb.CidrRange
(*AuthenticationSettings)(nil), // 13: filecachepb.AuthenticationSettings
(*Ratelimiter)(nil), // 14: filecachepb.Ratelimiter
(*CustomDomainConfig_StateCurrent)(nil), // 15: filecachepb.CustomDomainConfig.StateCurrent
(*CustomDomainConfig_StatePending)(nil), // 16: filecachepb.CustomDomainConfig.StatePending
(*FilterConfig_Custom)(nil), // 17: filecachepb.FilterConfig.Custom
(*FilterConfig_Parental)(nil), // 18: filecachepb.FilterConfig.Parental
(*FilterConfig_Schedule)(nil), // 19: filecachepb.FilterConfig.Schedule
(*FilterConfig_WeeklySchedule)(nil), // 20: filecachepb.FilterConfig.WeeklySchedule
(*FilterConfig_RuleList)(nil), // 21: filecachepb.FilterConfig.RuleList
(*FilterConfig_SafeBrowsing)(nil), // 22: filecachepb.FilterConfig.SafeBrowsing
(*FilterConfig_CategoryFilter)(nil), // 23: filecachepb.FilterConfig.CategoryFilter
(*timestamppb.Timestamp)(nil), // 24: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 25: google.protobuf.Duration
}
var file_filecache_proto_depIdxs = []int32{
23, // 0: profiledb.FileCache.sync_time:type_name -> google.protobuf.Timestamp
1, // 1: profiledb.FileCache.profiles:type_name -> profiledb.Profile
10, // 2: profiledb.FileCache.devices:type_name -> profiledb.Device
2, // 3: profiledb.Profile.custom_domains:type_name -> profiledb.AccountCustomDomains
4, // 4: profiledb.Profile.filter_config:type_name -> profiledb.FilterConfig
11, // 5: profiledb.Profile.access:type_name -> profiledb.Access
6, // 6: profiledb.Profile.blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 7: profiledb.Profile.blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 8: profiledb.Profile.blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 9: profiledb.Profile.blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
14, // 10: profiledb.Profile.ratelimiter:type_name -> profiledb.Ratelimiter
24, // 11: profiledb.Profile.filtered_response_ttl:type_name -> google.protobuf.Duration
6, // 12: profiledb.Profile.adult_blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 13: profiledb.Profile.adult_blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 14: profiledb.Profile.adult_blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 15: profiledb.Profile.adult_blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
6, // 16: profiledb.Profile.safe_browsing_blocking_mode_custom_ip:type_name -> profiledb.BlockingModeCustomIP
7, // 17: profiledb.Profile.safe_browsing_blocking_mode_nxdomain:type_name -> profiledb.BlockingModeNXDOMAIN
8, // 18: profiledb.Profile.safe_browsing_blocking_mode_null_ip:type_name -> profiledb.BlockingModeNullIP
9, // 19: profiledb.Profile.safe_browsing_blocking_mode_refused:type_name -> profiledb.BlockingModeREFUSED
3, // 20: profiledb.AccountCustomDomains.domains:type_name -> profiledb.CustomDomainConfig
15, // 21: profiledb.CustomDomainConfig.state_current:type_name -> profiledb.CustomDomainConfig.StateCurrent
16, // 22: profiledb.CustomDomainConfig.state_pending:type_name -> profiledb.CustomDomainConfig.StatePending
17, // 23: profiledb.FilterConfig.custom:type_name -> profiledb.FilterConfig.Custom
18, // 24: profiledb.FilterConfig.parental:type_name -> profiledb.FilterConfig.Parental
21, // 25: profiledb.FilterConfig.rule_list:type_name -> profiledb.FilterConfig.RuleList
22, // 26: profiledb.FilterConfig.safe_browsing:type_name -> profiledb.FilterConfig.SafeBrowsing
13, // 27: profiledb.Device.authentication:type_name -> profiledb.AuthenticationSettings
12, // 28: profiledb.Access.allowlist_cidr:type_name -> profiledb.CidrRange
12, // 29: profiledb.Access.blocklist_cidr:type_name -> profiledb.CidrRange
12, // 30: profiledb.Ratelimiter.client_cidr:type_name -> profiledb.CidrRange
23, // 31: profiledb.CustomDomainConfig.StateCurrent.not_before:type_name -> google.protobuf.Timestamp
23, // 32: profiledb.CustomDomainConfig.StateCurrent.not_after:type_name -> google.protobuf.Timestamp
23, // 33: profiledb.CustomDomainConfig.StatePending.expire:type_name -> google.protobuf.Timestamp
19, // 34: profiledb.FilterConfig.Parental.pause_schedule:type_name -> profiledb.FilterConfig.Schedule
20, // 35: profiledb.FilterConfig.Schedule.week:type_name -> profiledb.FilterConfig.WeeklySchedule
5, // 36: profiledb.FilterConfig.WeeklySchedule.mon:type_name -> profiledb.DayInterval
5, // 37: profiledb.FilterConfig.WeeklySchedule.tue:type_name -> profiledb.DayInterval
5, // 38: profiledb.FilterConfig.WeeklySchedule.wed:type_name -> profiledb.DayInterval
5, // 39: profiledb.FilterConfig.WeeklySchedule.thu:type_name -> profiledb.DayInterval
5, // 40: profiledb.FilterConfig.WeeklySchedule.fri:type_name -> profiledb.DayInterval
5, // 41: profiledb.FilterConfig.WeeklySchedule.sat:type_name -> profiledb.DayInterval
5, // 42: profiledb.FilterConfig.WeeklySchedule.sun:type_name -> profiledb.DayInterval
43, // [43:43] is the sub-list for method output_type
43, // [43:43] is the sub-list for method input_type
43, // [43:43] is the sub-list for extension type_name
43, // [43:43] is the sub-list for extension extendee
0, // [0:43] is the sub-list for field type_name
24, // 0: filecachepb.FileCache.sync_time:type_name -> google.protobuf.Timestamp
1, // 1: filecachepb.FileCache.profiles:type_name -> filecachepb.Profile
10, // 2: filecachepb.FileCache.devices:type_name -> filecachepb.Device
2, // 3: filecachepb.Profile.custom_domains:type_name -> filecachepb.AccountCustomDomains
4, // 4: filecachepb.Profile.filter_config:type_name -> filecachepb.FilterConfig
11, // 5: filecachepb.Profile.access:type_name -> filecachepb.Access
6, // 6: filecachepb.Profile.blocking_mode_custom_ip:type_name -> filecachepb.BlockingModeCustomIP
7, // 7: filecachepb.Profile.blocking_mode_nxdomain:type_name -> filecachepb.BlockingModeNXDOMAIN
8, // 8: filecachepb.Profile.blocking_mode_null_ip:type_name -> filecachepb.BlockingModeNullIP
9, // 9: filecachepb.Profile.blocking_mode_refused:type_name -> filecachepb.BlockingModeREFUSED
14, // 10: filecachepb.Profile.ratelimiter:type_name -> filecachepb.Ratelimiter
25, // 11: filecachepb.Profile.filtered_response_ttl:type_name -> google.protobuf.Duration
6, // 12: filecachepb.Profile.adult_blocking_mode_custom_ip:type_name -> filecachepb.BlockingModeCustomIP
7, // 13: filecachepb.Profile.adult_blocking_mode_nxdomain:type_name -> filecachepb.BlockingModeNXDOMAIN
8, // 14: filecachepb.Profile.adult_blocking_mode_null_ip:type_name -> filecachepb.BlockingModeNullIP
9, // 15: filecachepb.Profile.adult_blocking_mode_refused:type_name -> filecachepb.BlockingModeREFUSED
6, // 16: filecachepb.Profile.safe_browsing_blocking_mode_custom_ip:type_name -> filecachepb.BlockingModeCustomIP
7, // 17: filecachepb.Profile.safe_browsing_blocking_mode_nxdomain:type_name -> filecachepb.BlockingModeNXDOMAIN
8, // 18: filecachepb.Profile.safe_browsing_blocking_mode_null_ip:type_name -> filecachepb.BlockingModeNullIP
9, // 19: filecachepb.Profile.safe_browsing_blocking_mode_refused:type_name -> filecachepb.BlockingModeREFUSED
3, // 20: filecachepb.AccountCustomDomains.domains:type_name -> filecachepb.CustomDomainConfig
15, // 21: filecachepb.CustomDomainConfig.state_current:type_name -> filecachepb.CustomDomainConfig.StateCurrent
16, // 22: filecachepb.CustomDomainConfig.state_pending:type_name -> filecachepb.CustomDomainConfig.StatePending
17, // 23: filecachepb.FilterConfig.custom:type_name -> filecachepb.FilterConfig.Custom
18, // 24: filecachepb.FilterConfig.parental:type_name -> filecachepb.FilterConfig.Parental
21, // 25: filecachepb.FilterConfig.rule_list:type_name -> filecachepb.FilterConfig.RuleList
22, // 26: filecachepb.FilterConfig.safe_browsing:type_name -> filecachepb.FilterConfig.SafeBrowsing
23, // 27: filecachepb.FilterConfig.category_filter:type_name -> filecachepb.FilterConfig.CategoryFilter
13, // 28: filecachepb.Device.authentication:type_name -> filecachepb.AuthenticationSettings
12, // 29: filecachepb.Access.allowlist_cidr:type_name -> filecachepb.CidrRange
12, // 30: filecachepb.Access.blocklist_cidr:type_name -> filecachepb.CidrRange
12, // 31: filecachepb.Ratelimiter.client_cidr:type_name -> filecachepb.CidrRange
24, // 32: filecachepb.CustomDomainConfig.StateCurrent.not_before:type_name -> google.protobuf.Timestamp
24, // 33: filecachepb.CustomDomainConfig.StateCurrent.not_after:type_name -> google.protobuf.Timestamp
24, // 34: filecachepb.CustomDomainConfig.StatePending.expire:type_name -> google.protobuf.Timestamp
19, // 35: filecachepb.FilterConfig.Parental.pause_schedule:type_name -> filecachepb.FilterConfig.Schedule
20, // 36: filecachepb.FilterConfig.Schedule.week:type_name -> filecachepb.FilterConfig.WeeklySchedule
5, // 37: filecachepb.FilterConfig.WeeklySchedule.mon:type_name -> filecachepb.DayInterval
5, // 38: filecachepb.FilterConfig.WeeklySchedule.tue:type_name -> filecachepb.DayInterval
5, // 39: filecachepb.FilterConfig.WeeklySchedule.wed:type_name -> filecachepb.DayInterval
5, // 40: filecachepb.FilterConfig.WeeklySchedule.thu:type_name -> filecachepb.DayInterval
5, // 41: filecachepb.FilterConfig.WeeklySchedule.fri:type_name -> filecachepb.DayInterval
5, // 42: filecachepb.FilterConfig.WeeklySchedule.sat:type_name -> filecachepb.DayInterval
5, // 43: filecachepb.FilterConfig.WeeklySchedule.sun:type_name -> filecachepb.DayInterval
44, // [44:44] is the sub-list for method output_type
44, // [44:44] is the sub-list for method input_type
44, // [44:44] is the sub-list for extension type_name
44, // [44:44] is the sub-list for extension extendee
0, // [0:44] is the sub-list for field type_name
}
func init() { file_filecache_proto_init() }
@@ -2043,7 +2112,7 @@ func file_filecache_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_filecache_proto_rawDesc), len(file_filecache_proto_rawDesc)),
NumEnums: 0,
NumMessages: 23,
NumMessages: 24,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -1,6 +1,9 @@
// NOTE: Keep in sync with the fcpb/fc.proto
//
// TODO(f.setrakov): Remove when the migration to the opaque API is finished.
syntax = "proto3";
package profiledb;
package filecachepb;
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
@@ -126,10 +129,16 @@ message FilterConfig {
bool newly_registered_domains_enabled = 3;
}
message CategoryFilter {
repeated string ids = 1;
bool enabled = 2;
}
Custom custom = 1;
Parental parental = 2;
RuleList rule_list = 3;
SafeBrowsing safe_browsing = 4;
CategoryFilter category_filter = 5;
}
message DayInterval {

View File

@@ -138,6 +138,7 @@ func (x *Profile) toInternal(
Enabled: pbFltConf.Custom.Enabled,
},
Parental: &filter.ConfigParental{
Categories: categoryFilterToInternal(pbFltConf.CategoryFilter),
PauseSchedule: schedule,
// Consider blocked-service IDs to have been prevalidated.
BlockedServices: agdprotobuf.UnsafelyConvertStrSlice[string, filter.BlockedServiceID](
@@ -192,6 +193,25 @@ func (x *Profile) toInternal(
}, nil
}
// categoryFilterToInternal converts a filter config's protobuf category-filter
// structure to an internal one. If pbCatFlt is nil, it returns a disabled
// config.
func categoryFilterToInternal(
pbCatFlt *FilterConfig_CategoryFilter,
) (c *filter.ConfigCategories) {
if pbCatFlt == nil {
return &filter.ConfigCategories{
Enabled: false,
}
}
// Consider the categories to have been prevalidated.
ids := agdprotobuf.UnsafelyConvertStrSlice[string, filter.CategoryID](pbCatFlt.GetIds())
return &filter.ConfigCategories{
IDs: ids,
Enabled: pbCatFlt.GetEnabled(),
}
}
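The nil-safe conversion above maps a missing protobuf message to an explicitly disabled config instead of propagating a nil pointer. A minimal standalone sketch of the same pattern, using hypothetical simplified stand-ins for the protobuf and internal types:

```go
package main

import "fmt"

// pbCategoryFilter is a hypothetical stand-in for the protobuf message.
type pbCategoryFilter struct {
	IDs     []string
	Enabled bool
}

// configCategories is a hypothetical stand-in for the internal config.
type configCategories struct {
	IDs     []string
	Enabled bool
}

// toInternal mirrors the nil-safe pattern: a missing protobuf message maps to
// an explicitly disabled config rather than a nil pointer, so callers never
// need a nil check.
func toInternal(pb *pbCategoryFilter) *configCategories {
	if pb == nil {
		return &configCategories{Enabled: false}
	}

	return &configCategories{IDs: pb.IDs, Enabled: pb.Enabled}
}

func main() {
	fmt.Println(toInternal(nil).Enabled)                              // false
	fmt.Println(toInternal(&pbCategoryFilter{Enabled: true}).Enabled) // true
}
```

The design choice matters for backward compatibility: older cache files without the new field decode to a nil message, which then behaves exactly like a profile with the category filter turned off.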
// toInternal converts a protobuf protection-schedule structure to an internal
// one. If x is nil, c is nil.
func (x *FilterConfig_Schedule) toInternal() (c *filter.ConfigSchedule, err error) {
@@ -473,7 +493,7 @@ func dohPasswordToInternal(
) (p agdpasswd.Authenticator, err error) {
switch pbp := pbp.(type) {
case nil:
return nil, nil
return agdpasswd.AllowAuthenticator{}, nil
case *AuthenticationSettings_PasswordHashBcrypt:
return agdpasswd.NewPasswordHashBcrypt(pbp.PasswordHashBcrypt), nil
default:
@@ -657,6 +677,14 @@ func filterConfigToProtobuf(c *filter.ConfigClient) (fc *FilterConfig) {
}
return &FilterConfig{
// TODO(a.garipov): Move to parental.
CategoryFilter: &FilterConfig_CategoryFilter{
Enabled: c.Parental.Categories.Enabled,
Ids: agdprotobuf.UnsafelyConvertStrSlice[
filter.CategoryID,
string,
](c.Parental.Categories.IDs),
},
Custom: &FilterConfig_Custom{
Rules: rules,
Enabled: c.Custom.Enabled,
@@ -767,8 +795,8 @@ func blockingModeToProtobuf(m dnsmsg.BlockingMode) (pbBlockingMode isProfile_Blo
case *dnsmsg.BlockingModeCustomIP:
return &Profile_BlockingModeCustomIp{
BlockingModeCustomIp: &BlockingModeCustomIP{
Ipv4: ipsToByteSlices(m.IPv4),
Ipv6: ipsToByteSlices(m.IPv6),
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
},
}
case *dnsmsg.BlockingModeNXDOMAIN:
@@ -799,8 +827,8 @@ func adultBlockingModeToProtobuf(
case *dnsmsg.BlockingModeCustomIP:
return &Profile_AdultBlockingModeCustomIp{
AdultBlockingModeCustomIp: &BlockingModeCustomIP{
Ipv4: ipsToByteSlices(m.IPv4),
Ipv6: ipsToByteSlices(m.IPv6),
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
},
}
case *dnsmsg.BlockingModeNXDOMAIN:
@@ -831,8 +859,8 @@ func safeBrowsingBlockingModeToProtobuf(
case *dnsmsg.BlockingModeCustomIP:
return &Profile_SafeBrowsingBlockingModeCustomIp{
SafeBrowsingBlockingModeCustomIp: &BlockingModeCustomIP{
Ipv4: ipsToByteSlices(m.IPv4),
Ipv6: ipsToByteSlices(m.IPv6),
Ipv4: agdprotobuf.IPsToByteSlices(m.IPv4),
Ipv6: agdprotobuf.IPsToByteSlices(m.IPv6),
},
}
case *dnsmsg.BlockingModeNXDOMAIN:
@@ -852,29 +880,6 @@ func safeBrowsingBlockingModeToProtobuf(
}
}
// ipsToByteSlices is a wrapper around netip.Addr.MarshalBinary that ignores the
// always-nil errors.
func ipsToByteSlices(ips []netip.Addr) (data [][]byte) {
if ips == nil {
return nil
}
data = make([][]byte, 0, len(ips))
for _, ip := range ips {
data = append(data, ipToBytes(ip))
}
return data
}
// ipToBytes is a wrapper around netip.Addr.MarshalBinary that ignores the
// always-nil error.
func ipToBytes(ip netip.Addr) (b []byte) {
b, _ = ip.MarshalBinary()
return b
}
// ratelimiterToProtobuf converts the rate-limit settings to protobuf.
func ratelimiterToProtobuf(c *agd.RatelimitConfig) (r *Ratelimiter) {
if c == nil {
@@ -895,10 +900,10 @@ func devicesToProtobuf(devices []*agd.Device) (pbDevices []*Device) {
pbDevices = append(pbDevices, &Device{
Authentication: authToProtobuf(d.Auth),
DeviceId: string(d.ID),
LinkedIp: ipToBytes(d.LinkedIP),
LinkedIp: agdprotobuf.IPToBytes(d.LinkedIP),
HumanIdLower: string(d.HumanIDLower),
DeviceName: string(d.Name),
DedicatedIps: ipsToByteSlices(d.DedicatedIPs),
DedicatedIps: agdprotobuf.IPsToByteSlices(d.DedicatedIPs),
FilteringEnabled: d.FilteringEnabled,
})
}
@@ -925,7 +930,9 @@ func dohPasswordToProtobuf(
p agdpasswd.Authenticator,
) (pbp isAuthenticationSettings_DohPasswordHash) {
switch p := p.(type) {
case agdpasswd.AllowAuthenticator:
// TODO(a.garipov): Remove nil once we make sure that the caches on prod
// are valid.
case nil, agdpasswd.AllowAuthenticator:
return nil
case *agdpasswd.PasswordHashBcrypt:
return &AuthenticationSettings_PasswordHashBcrypt{

View File

@@ -105,12 +105,12 @@ func BenchmarkCache(b *testing.B) {
// Most recent results:
//
// goos: linux
// goarch: amd64
// goos: darwin
// goarch: arm64
// pkg: github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/filecachepb
// cpu: AMD Ryzen 7 PRO 4750U with Radeon Graphics
// BenchmarkCache/to_protobuf-16 274957 5419 ns/op 3000 B/op 60 allocs/op
// BenchmarkCache/from_protobuf-16 40694 27856 ns/op 9792 B/op 70 allocs/op
// BenchmarkCache/encode-16 171058 6776 ns/op 480 B/op 1 allocs/op
// BenchmarkCache/decode-16 72825 14257 ns/op 3016 B/op 74 allocs/op
// cpu: Apple M1 Pro
// BenchmarkCache/to_protobuf-8 553504 2186 ns/op 3240 B/op 65 allocs/op
// BenchmarkCache/from_protobuf-8 49960 24243 ns/op 10096 B/op 77 allocs/op
// BenchmarkCache/encode-8 456519 2715 ns/op 512 B/op 1 allocs/op
// BenchmarkCache/decode-8 243376 5100 ns/op 3280 B/op 81 allocs/op
}

View File

@@ -97,18 +97,32 @@ func (s *Storage) Load(ctx context.Context) (c *internal.FileCache, err error) {
}
// Store implements the [internal.FileCacheStorage] interface for *Storage.
func (s *Storage) Store(ctx context.Context, c *internal.FileCache) (err error) {
func (s *Storage) Store(
ctx context.Context,
c *internal.FileCache,
) (n datasize.ByteSize, err error) {
profNum := len(c.Profiles)
devNum := len(c.Devices)
s.logger.InfoContext(ctx, "saving profiles", "path", s.path, "num", profNum)
defer s.logger.InfoContext(ctx, "saved profiles", "path", s.path, "num", profNum)
s.logger.InfoContext(ctx, "saving profiles", "path", s.path, "num", profNum, "devs", devNum)
defer s.logger.InfoContext(
ctx,
"saved profiles",
"path", s.path,
"num", profNum,
"devs", devNum,
)
fc := toProtobuf(c)
b, err := proto.Marshal(fc)
if err != nil {
return fmt.Errorf("encoding protobuf: %w", err)
return 0, fmt.Errorf("encoding protobuf: %w", err)
}
// Don't wrap the error, because it's informative enough as is.
return renameio.WriteFile(s.path, b, 0o600)
err = renameio.WriteFile(s.path, b, 0o600)
if err != nil {
return 0, fmt.Errorf("writing file: %w", err)
}
return datasize.ByteSize(len(b)), nil
}

View File

@@ -34,8 +34,9 @@ func TestStorage(t *testing.T) {
}
ctx := profiledbtest.ContextWithTimeout(t)
err := s.Store(ctx, fc)
n, err := s.Store(ctx, fc)
require.NoError(t, err)
assert.Positive(t, n)
gotFC, err := s.Load(ctx)
require.NoError(t, err)

View File

@@ -8,6 +8,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/agd"
"github.com/AdguardTeam/golibs/errors"
"github.com/c2h5oh/datasize"
)
// FileCacheVersion is the version of cached data structure. It must be
@@ -35,9 +36,10 @@ type FileCacheStorage interface {
// must return a nil *FileCache. Load must return an informative error.
Load(ctx context.Context) (c *FileCache, err error)
// Store writes the data to the cache file. c must not be nil. Store must
// return an informative error.
Store(ctx context.Context, c *FileCache) (err error)
// Store writes the data to the cache file. c must not be nil. It returns the
// size of the written file; the returned error must be informative.
Store(ctx context.Context, c *FileCache) (n datasize.ByteSize, err error)
}
// EmptyFileCacheStorage is the empty file-cache storage that does nothing and
@@ -53,4 +55,6 @@ func (EmptyFileCacheStorage) Load(_ context.Context) (_ *FileCache, _ error) { r
// Store implements the [FileCacheStorage] interface for EmptyFileCacheStorage.
// It does nothing and returns nil.
func (EmptyFileCacheStorage) Store(_ context.Context, _ *FileCache) (_ error) { return nil }
func (EmptyFileCacheStorage) Store(_ context.Context, _ *FileCache) (_ datasize.ByteSize, _ error) {
return 0, nil
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/AdguardTeam/AdGuardDNS/internal/filter/custom"
"github.com/AdguardTeam/AdGuardDNS/internal/geoip"
"github.com/AdguardTeam/golibs/container"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/logutil/slogutil"
"github.com/AdguardTeam/golibs/testutil"
"github.com/c2h5oh/datasize"
@@ -52,6 +53,26 @@ const WellKnownPath = "/.well-known/pki-validation/abcd1234"
// Logger is the common logger for tests.
var Logger = slogutil.NewDiscardLogger()
var (
// IPv4 is a common IPv4 address for tests.
IPv4 = netip.MustParseAddr("192.0.2.0")
// IPv6 is a common IPv6 address for tests.
IPv6 = netip.MustParseAddr("2001:db8::")
// IPv4Bytes is a common binary marshalled IPv4 address for tests.
IPv4Bytes = errors.Must(IPv4.MarshalBinary())
// IPv6Bytes is a common binary marshalled IPv6 address for tests.
IPv6Bytes = errors.Must(IPv6.MarshalBinary())
// IPv4Prefix is a common IPv4 prefix for tests.
IPv4Prefix = netip.PrefixFrom(IPv4, 24)
// IPv6Prefix is a common IPv6 prefix for tests.
IPv6Prefix = netip.PrefixFrom(IPv6, 32)
)
// ProfileAccessConstructor is the common constructor of profile access managers
// for tests.
var ProfileAccessConstructor = access.NewProfileConstructor(&access.ProfileConstructorConfig{
@@ -64,6 +85,26 @@ func ContextWithTimeout(tb testing.TB) (ctx context.Context) {
return testutil.ContextWithTimeout(tb, Timeout)
}
// NewDevice returns a new device with the given ID and name for tests.
func NewDevice(tb testing.TB, id agd.DeviceID, name agd.DeviceName) (d *agd.Device) {
tb.Helper()
return &agd.Device{
Auth: &agd.AuthSettings{
Enabled: true,
DoHAuthOnly: true,
PasswordHash: agdpasswd.NewPasswordHashBcrypt([]byte("test")),
},
ID: id,
LinkedIP: netip.MustParseAddr("1.2.3.4"),
Name: name,
DedicatedIPs: []netip.Addr{
netip.MustParseAddr("1.2.4.5"),
},
FilteringEnabled: true,
}
}
// NewProfile returns the common profile and device for tests. The profile has
// ID [ProfileID] and the device, [DeviceID]. The response size estimate for
// the rate limiter is [RespSzEst].
@@ -73,24 +114,13 @@ func NewProfile(tb testing.TB) (p *agd.Profile, d *agd.Device) {
loc, err := agdtime.LoadLocation("Europe/Brussels")
require.NoError(tb, err)
dev := &agd.Device{
Auth: &agd.AuthSettings{
Enabled: true,
DoHAuthOnly: true,
PasswordHash: agdpasswd.NewPasswordHashBcrypt([]byte("test")),
},
ID: DeviceID,
LinkedIP: netip.MustParseAddr("1.2.3.4"),
Name: "dev1",
DedicatedIPs: []netip.Addr{
netip.MustParseAddr("1.2.4.5"),
},
FilteringEnabled: true,
}
const schedEnd = 701
parental := &filter.ConfigParental{
Categories: &filter.ConfigCategories{
IDs: []filter.CategoryID{"games"},
Enabled: true,
},
PauseSchedule: &filter.ConfigSchedule{
Week: &filter.WeeklySchedule{
time.Monday: {Start: 0, End: schedEnd},
@@ -136,6 +166,8 @@ func NewProfile(tb testing.TB) (p *agd.Profile, d *agd.Device) {
Rules: []filter.RuleText{"|blocked-by-custom.example^"},
}
dev := NewDevice(tb, DeviceID, "dev1")
return &agd.Profile{
CustomDomains: customDomains,
FilterConfig: &filter.ConfigClient{

View File

@@ -3,6 +3,8 @@ package profiledb
import (
"context"
"time"
"github.com/c2h5oh/datasize"
)
// Metrics is an interface that is used for the collection of the user profiles
@@ -21,6 +23,17 @@ type Metrics interface {
// IncrementDeleted increments the total number of deleted user profiles.
IncrementDeleted(ctx context.Context)
// SetLastFileCacheSyncTime sets the time of the last successful cache file
// synchronization.
SetLastFileCacheSyncTime(ctx context.Context, t time.Time)
// SetFileCacheSize sets the size of the cache file.
SetFileCacheSize(ctx context.Context, size datasize.ByteSize)
// ObserveFileCacheStoreDuration observes the duration of storing the file
// cache to disk.
ObserveFileCacheStoreDuration(ctx context.Context, d time.Duration)
}
// UpdateMetrics is an alias for a structure that contains the information about
@@ -64,3 +77,13 @@ func (EmptyMetrics) IncrementSyncTimeouts(_ context.Context, _ bool) {}
// IncrementDeleted implements the [Metrics] interface for EmptyMetrics.
func (EmptyMetrics) IncrementDeleted(_ context.Context) {}
// SetLastFileCacheSyncTime implements the [Metrics] interface for EmptyMetrics.
func (EmptyMetrics) SetLastFileCacheSyncTime(_ context.Context, _ time.Time) {}
// SetFileCacheSize implements the [Metrics] interface for EmptyMetrics.
func (EmptyMetrics) SetFileCacheSize(_ context.Context, _ datasize.ByteSize) {}
// ObserveFileCacheStoreDuration implements the [Metrics] interface for
// EmptyMetrics.
func (EmptyMetrics) ObserveFileCacheStoreDuration(_ context.Context, _ time.Duration) {}

View File

@@ -52,6 +52,7 @@ func newProfileDB(tb testing.TB, c *profiledb.Config) (db *profiledb.Default) {
c.Storage = cmp.Or[profiledb.Storage](c.Storage, agdtest.NewProfileStorage())
c.CacheFilePath = cmp.Or(c.CacheFilePath, "none")
c.CacheFileIvl = cmp.Or(c.CacheFileIvl, 1*time.Minute)
c.FullSyncIvl = cmp.Or(c.FullSyncIvl, 1*time.Minute)
c.FullSyncRetryIvl = cmp.Or(c.FullSyncRetryIvl, 1*time.Minute)

View File

@@ -177,12 +177,17 @@ type jsonlEntry struct {
// FilterListID is the ID of the first filter the rules of which matched.
// If no rules matched, this field is omitted.
//
// If the entry is matched by a blocked service, FilterListID is set to
// [filter.IDBlockedService]. In the case of a category filter, FilterListID is
// set to [filter.IDCategory].
//
// The short name "l" stands for "list of filter rules".
FilterListID filter.ID `json:"l,omitempty"`
// FilterRule is the first rule that matched the request or the ID of the
// blocked service, if FilterListID is [filter.IDBlockedService]. If no
// rules matched, this field is omitted.
// FilterRule is the reason why the request was filtered. It is the text of
// the rule that matched the request, or the ID of the blocked service, or
// the ID of the category. If no rules matched, this field is
// omitted.
//
// The short name "m" stands for "match".
FilterRule filter.RuleText `json:"m,omitempty"`

View File

@@ -281,7 +281,11 @@ func (s *mockDNSServiceServer) newDNSProfile(isFullSync bool) (dp *backendpb.DNS
Rps: 100,
Enabled: true,
},
CustomDomain: customDomain,
CustomDomain: customDomain,
CategoryFilter: &backendpb.CategoryFilterSettings{
Ids: []string{"games"},
Enabled: true,
},
AccountId: "acc1234",
DeviceChanges: deviceChanges,
StandardAccessSettingsEnabled: true,

View File

@@ -55,7 +55,7 @@ readonly committime
# Compile them in.
version_pkg='github.com/AdguardTeam/AdGuardDNS/internal/version'
ldflags="-s -w"
ldflags=""
ldflags="${ldflags} -X ${version_pkg}.branch=${branch}"
ldflags="${ldflags} -X ${version_pkg}.committime=${committime}"
ldflags="${ldflags} -X ${version_pkg}.revision=${revision}"

View File

@@ -42,19 +42,18 @@ ecscache() (
)
fcpb() (
# TODO(f.setrakov): Change directory to ./internal/profiledb/internal/, so
# we don't need to go up later.
cd ./internal/profiledb/internal/filecachepb/
cd ./internal/profiledb/internal/fcpb/
protoc \
--go_opt=paths=source_relative \
--go_out=../fcpb/ \
--go_out=./ \
--go_opt=default_api_level=API_OPAQUE \
--go_opt=Mfilecache.proto=github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb \
./filecache.proto
--go_opt=Mfc.proto=github.com/AdguardTeam/AdGuardDNS/internal/profiledb/internal/fcpb \
./fc.proto
)
filecachepb() (
cd ./internal/profiledb/internal/filecachepb/
protoc \
--go_opt=paths=source_relative \
--go_out=. \