all: sync with master; upd chlog

Ainar Garipov 2023-07-12 15:13:31 +03:00
parent 19347d263a
commit ec83d0eb86
55 changed files with 1699 additions and 1006 deletions


@@ -1,7 +1,7 @@
 'name': 'build'

 'env':
-  'GO_VERSION': '1.19.10'
+  'GO_VERSION': '1.19.11'
   'NODE_VERSION': '14'

 'on':


@@ -1,7 +1,7 @@
 'name': 'lint'

 'env':
-  'GO_VERSION': '1.19.10'
+  'GO_VERSION': '1.19.11'

 'on':
   'push':

.gitignore

@@ -9,6 +9,7 @@
 *.db
 *.log
 *.snap
+*.test
 /agh-backup/
 /bin/
 /build/*


@@ -14,11 +14,11 @@ and this project adheres to
 <!--
 ## [v0.108.0] - TBA

-## [v0.107.34] - 2023-07-26 (APPROX.)
+## [v0.107.35] - 2023-08-02 (APPROX.)

-See also the [v0.107.34 GitHub milestone][ms-v0.107.34].
+See also the [v0.107.35 GitHub milestone][ms-v0.107.35].

-[ms-v0.107.34]: https://github.com/AdguardTeam/AdGuardHome/milestone/69?closed=1
+[ms-v0.107.35]: https://github.com/AdguardTeam/AdGuardHome/milestone/70?closed=1

 NOTE: Add new changes BELOW THIS COMMENT.
 -->
@@ -29,6 +29,88 @@ NOTE: Add new changes ABOVE THIS COMMENT.
## [v0.107.34] - 2023-07-12
See also the [v0.107.34 GitHub milestone][ms-v0.107.34].
### Security
- Go version has been updated to prevent the possibility of exploiting the
CVE-2023-29406 Go vulnerability fixed in [Go 1.19.11][go-1.19.11].
### Added
- Ability to ignore queries for the root domain, such as `NS .` queries
([#5990]).
### Changed
- Improved CPU and RAM consumption during updates of filtering-rule lists.
#### Configuration Changes
In this release, the schema version has changed from 23 to 24.
- Properties starting with `log_`, and `verbose` property, which used to set up
logging are now moved to the new object `log` containing new properties
`file`, `max_backups`, `max_size`, `max_age`, `compress`, `local_time`, and
`verbose`:
```yaml
# BEFORE:
'log_file': ""
'log_max_backups': 0
'log_max_size': 100
'log_max_age': 3
'log_compress': false
'log_localtime': false
'verbose': false
# AFTER:
'log':
'file': ""
'max_backups': 0
'max_size': 100
'max_age': 3
'compress': false
'local_time': false
'verbose': false
```
To rollback this change, remove the new object `log`, set back `log_` and
`verbose` properties and change the `schema_version` back to `23`.
### Deprecated
- Default exposure of the non-standard ports 784 and 8853 for DNS-over-QUIC in
the `Dockerfile`.
### Fixed
- Two unspecified IPs when a host is blocked in two filter lists ([#5972]).
- Incorrect setting of Parental Control cache size.
- Excessive RAM and CPU consumption by Safe Browsing and Parental Control
filters ([#5896]).
### Removed
- The `HEALTHCHECK` section and the use of `tini` in the `ENTRYPOINT` section in
`Dockerfile` ([#5939]). They caused a lot of issues, especially with tools
like `docker-compose` and `podman`.
**NOTE:** Some Docker tools may cache `ENTRYPOINT` sections, so some users may
be required to backup their configuration, stop the container, purge the old
image, and reload it from scratch.
[#5896]: https://github.com/AdguardTeam/AdGuardHome/issues/5896
[#5972]: https://github.com/AdguardTeam/AdGuardHome/issues/5972
[#5990]: https://github.com/AdguardTeam/AdGuardHome/issues/5990
[go-1.19.11]: https://groups.google.com/g/golang-announce/c/2q13H6LEEx0/m/sduSepLLBwAJ
[ms-v0.107.34]: https://github.com/AdguardTeam/AdGuardHome/milestone/69?closed=1
 ## [v0.107.33] - 2023-07-03

 See also the [v0.107.33 GitHub milestone][ms-v0.107.33].
@@ -147,9 +229,9 @@ In this release, the schema version has changed from 20 to 23.
 ### Deprecated

-- `HEALTHCHECK` and `ENTRYPOINT` sections in `Dockerfile` ([#5939]).  They cause
-  a lot of issues, especially with tools like `docker-compose` and `podman`, and
-  will be removed in a future release.
+- The `HEALTHCHECK` section and the use of `tini` in the `ENTRYPOINT` section in
+  `Dockerfile` ([#5939]).  They cause a lot of issues, especially with tools
+  like `docker-compose` and `podman`, and will be removed in a future release.

 - Flags `-h`, `--host`, `-p`, `--port` have been deprecated.  The `-h` flag
   will work as an alias for `--help`, instead of the deprecated `--host` in the
   future releases.
@@ -2160,11 +2242,12 @@ See also the [v0.104.2 GitHub milestone][ms-v0.104.2].
 <!--
-[Unreleased]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.34...HEAD
-[v0.107.34]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.33...v0.107.34
+[Unreleased]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.35...HEAD
+[v0.107.35]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.34...v0.107.35
 -->
-[Unreleased]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.33...HEAD
+[Unreleased]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.34...HEAD
+[v0.107.34]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.33...v0.107.34
 [v0.107.33]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.32...v0.107.33
 [v0.107.32]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.31...v0.107.32
 [v0.107.31]: https://github.com/AdguardTeam/AdGuardHome/compare/v0.107.30...v0.107.31


@@ -75,7 +75,7 @@ build: deps quick-build
 quick-build: js-build go-build

-ci: deps test
+ci: deps test go-bench go-fuzz

 deps: js-deps go-deps
 lint: js-lint go-lint
@@ -101,8 +101,10 @@ js-deps:
 js-lint: ; $(NPM) $(NPM_FLAGS) run lint
 js-test: ; $(NPM) $(NPM_FLAGS) run test
+go-bench: ; $(ENV) "$(SHELL)" ./scripts/make/go-bench.sh
 go-build: ; $(ENV) "$(SHELL)" ./scripts/make/go-build.sh
 go-deps: ; $(ENV) "$(SHELL)" ./scripts/make/go-deps.sh
+go-fuzz: ; $(ENV) "$(SHELL)" ./scripts/make/go-fuzz.sh
 go-lint: ; $(ENV) "$(SHELL)" ./scripts/make/go-lint.sh
 go-tools: ; $(ENV) "$(SHELL)" ./scripts/make/go-tools.sh


@@ -7,7 +7,7 @@
 # Make sure to sync any changes with the branch overrides below.
 'variables':
   'channel': 'edge'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'

 'stages':
 - 'Build frontend':
@@ -272,7 +272,7 @@
 # need to build a few of these.
 'variables':
   'channel': 'beta'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'

 # release-vX.Y.Z branches are the branches from which the actual final
 # release is built.
 - '^release-v[0-9]+\.[0-9]+\.[0-9]+':
@@ -287,4 +287,4 @@
 # are the ones that actually get released.
 'variables':
   'channel': 'release'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'


@@ -10,7 +10,7 @@
 # Make sure to sync any changes with the branch overrides below.
 'variables':
   'channel': 'edge'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'
   'snapcraftChannel': 'edge'

 'stages':
@@ -191,7 +191,7 @@
 # need to build a few of these.
 'variables':
   'channel': 'beta'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'
   'snapcraftChannel': 'beta'

 # release-vX.Y.Z branches are the branches from which the actual final
 # release is built.
@@ -207,5 +207,5 @@
 # are the ones that actually get released.
 'variables':
   'channel': 'release'
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'
   'snapcraftChannel': 'candidate'


@@ -5,7 +5,7 @@
 'key': 'AHBRTSPECS'
 'name': 'AdGuard Home - Build and run tests'
 'variables':
-  'dockerGo': 'adguard/golang-ubuntu:6.7'
+  'dockerGo': 'adguard/golang-ubuntu:6.8'

 'stages':
 - 'Tests':


@@ -1,6 +1,6 @@
 # A docker file for scripts/make/build-docker.sh.

-FROM alpine:3.17
+FROM alpine:3.18

 ARG BUILD_DATE
 ARG VERSION
@@ -25,8 +25,6 @@ RUN apk --no-cache add ca-certificates libcap tzdata && \
 	mkdir -p /opt/adguardhome/conf /opt/adguardhome/work && \
 	chown -R nobody: /opt/adguardhome

-RUN apk --no-cache add tini
-
 ARG DIST_DIR
 ARG TARGETARCH
 ARG TARGETOS
@@ -43,43 +41,24 @@ RUN setcap 'cap_net_bind_service=+eip' /opt/adguardhome/AdGuardHome
 # 68 : UDP : DHCP (client)
 # 80 : TCP : HTTP (main)
 # 443 : TCP, UDP : HTTPS, DNS-over-HTTPS (incl. HTTP/3), DNSCrypt (main)
-# 784 : UDP : DNS-over-QUIC (experimental)
+# 784 : UDP : DNS-over-QUIC (deprecated; use 853)
 # 853 : TCP, UDP : DNS-over-TLS, DNS-over-QUIC
 # 3000 : TCP, UDP : HTTP(S) (alt, incl. HTTP/3)
-# 3001 : TCP, UDP : HTTP(S) (beta, incl. HTTP/3)
 # 5443 : TCP, UDP : DNSCrypt (alt)
 # 6060 : TCP : HTTP (pprof)
-# 8853 : UDP : DNS-over-QUIC (experimental)
+# 8853 : UDP : DNS-over-QUIC (deprecated; use 853)
 #
 # TODO(a.garipov): Remove the old, non-standard 784 and 8853 ports for
 # DNS-over-QUIC in a future release.
 EXPOSE 53/tcp 53/udp 67/udp 68/udp 80/tcp 443/tcp 443/udp 784/udp\
-    853/tcp 853/udp 3000/tcp 3000/udp 5443/tcp\
-    5443/udp 6060/tcp 8853/udp
+    853/tcp 853/udp 3000/tcp 3000/udp 5443/tcp 5443/udp 6060/tcp\
+    8853/udp

 WORKDIR /opt/adguardhome/work

-# Install helpers for healthcheck.
-COPY --chown=nobody:nogroup\
-	./${DIST_DIR}/docker/scripts\
-	/opt/adguardhome/scripts
-
-HEALTHCHECK \
-    --interval=30s \
-    --timeout=10s \
-    --retries=3 \
-    CMD [ "/opt/adguardhome/scripts/healthcheck.sh" ]
-
-# It seems that the healthckech script sometimes spawns zombie processes, so we
-# need a way to handle them, since AdGuard Home doesn't know how to keep track
-# of the processes delegated to it by the OS.  Use tini as entry point because
-# it needs the PID=1 to be the default parent for orphaned processes.
-#
-# See https://github.com/adguardTeam/adGuardHome/issues/3290.
-ENTRYPOINT [ "/sbin/tini", "--" ]
+ENTRYPOINT ["/opt/adguardhome/AdGuardHome"]

 CMD [ \
-    "/opt/adguardhome/AdGuardHome", \
     "--no-check-update", \
     "-c", "/opt/adguardhome/conf/AdGuardHome.yaml", \
     "-w", "/opt/adguardhome/work" \


@@ -1,29 +0,0 @@
/^[^[:space:]]/ { is_dns = /^dns:/ }
/^[[:space:]]+bind_hosts:/ { if (is_dns) prev_line = FNR }
/^[[:space:]]+- .+/ {
if (FNR - prev_line == 1) {
addrs[$2] = true
prev_line = FNR
if ($2 == "0.0.0.0" || $2 == "'::'") {
# Drop all the other addresses.
delete addrs
addrs[""] = true
prev_line = -1
}
}
}
/^[[:space:]]+port:/ { if (is_dns) port = $2 }
END {
for (addr in addrs) {
if (match(addr, ":")) {
print "[" addr "]:" port
} else {
print addr ":" port
}
}
}


@@ -1,107 +0,0 @@
#!/bin/sh
# AdGuard Home Docker healthcheck script
# Exit the script if a pipeline fails (-e), prevent accidental filename
# expansion (-f), and consider undefined variables as errors (-u).
set -e -f -u
# Function error_exit is an echo wrapper that writes to stderr and stops the
# script execution with code 1.
error_exit() {
echo "$1" 1>&2
exit 1
}
agh_dir="/opt/adguardhome"
readonly agh_dir
filename="${agh_dir}/conf/AdGuardHome.yaml"
readonly filename
if ! [ -f "$filename" ]
then
wget "http://127.0.0.1:3000" -O /dev/null -q || exit 1
exit 0
fi
help_dir="${agh_dir}/scripts"
readonly help_dir
# Parse web host
web_url="$( awk -f "${help_dir}/web-bind.awk" "$filename" )"
readonly web_url
if [ "$web_url" = '' ]
then
error_exit "no web bindings could be retrieved from $filename"
fi
# TODO(e.burkov): Deal with 0 port.
case "$web_url"
in
(*':0')
error_exit '0 in web port is not supported by healthcheck'
;;
(*)
# Go on.
;;
esac
# Parse DNS hosts
dns_hosts="$( awk -f "${help_dir}/dns-bind.awk" "$filename" )"
readonly dns_hosts
if [ "$dns_hosts" = '' ]
then
error_exit "no DNS bindings could be retrieved from $filename"
fi
first_dns="$( echo "$dns_hosts" | head -n 1 )"
readonly first_dns
# TODO(e.burkov): Deal with 0 port.
case "$first_dns"
in
(*':0')
error_exit '0 in DNS port is not supported by healthcheck'
;;
(*)
# Go on.
;;
esac
# Check
# Skip SSL certificate validation since there is no guarantee the container
# trusts the one used. It should be safe to drop the SSL validation since the
# current script intended to be used from inside the container and only checks
# the endpoint availability, ignoring the content of the response.
#
# See https://github.com/AdguardTeam/AdGuardHome/issues/5642.
wget --no-check-certificate "$web_url" -O /dev/null -q || exit 1
test_fqdn="healthcheck.adguardhome.test."
readonly test_fqdn
# The awk script currently returns only port prefixed with colon in case of
# unspecified address.
case "$first_dns"
in
(':'*)
nslookup -type=a "$test_fqdn" "127.0.0.1${first_dns}" > /dev/null ||\
nslookup -type=a "$test_fqdn" "[::1]${first_dns}" > /dev/null ||\
error_exit "nslookup failed for $host"
;;
(*)
echo "$dns_hosts" | while read -r host
do
nslookup -type=a "$test_fqdn" "$host" > /dev/null ||\
error_exit "nslookup failed for $host"
done
;;
esac


@@ -1,5 +0,0 @@
# Don't consider the HTTPS hostname since the enforced HTTPS redirection should
# work if the SSL check skipped. See file docker/healthcheck.sh.
/^[^[:space:]]/ { is_http = /^http:/ }
/^[[:space:]]+address:/ { if (is_http) print "http://" $2 }

go.mod

@@ -9,7 +9,9 @@ require (
 	github.com/AdguardTeam/urlfilter v0.16.1
 	github.com/NYTimes/gziphandler v1.1.1
 	github.com/ameshkov/dnscrypt/v2 v2.2.7
+	github.com/bluele/gcache v0.0.2
 	github.com/digineo/go-ipset/v2 v2.2.1
+	github.com/dimfeld/httptreemux/v5 v5.5.0
 	github.com/fsnotify/fsnotify v1.6.0
 	github.com/go-ping/ping v1.1.0
 	github.com/google/go-cmp v0.5.9
@@ -44,7 +46,6 @@ require (
 	github.com/aead/poly1305 v0.0.0-20180717145839-3fee0db0b635 // indirect
 	github.com/ameshkov/dnsstamps v1.0.3 // indirect
 	github.com/beefsack/go-rate v0.0.0-20220214233405-116f4ca011a0 // indirect
-	github.com/bluele/gcache v0.0.2 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
 	github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
 	github.com/golang/mock v1.6.0 // indirect

go.sum

@@ -29,6 +29,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/digineo/go-ipset/v2 v2.2.1 h1:k6skY+0fMqeUjjeWO/m5OuWPSZUAn7AucHMnQ1MX77g=
 github.com/digineo/go-ipset/v2 v2.2.1/go.mod h1:wBsNzJlZlABHUITkesrggFnZQtgW5wkqw1uo8Qxe0VU=
+github.com/dimfeld/httptreemux/v5 v5.5.0 h1:p8jkiMrCuZ0CmhwYLcbNbl7DDo21fozhKHQ2PccwOFQ=
+github.com/dimfeld/httptreemux/v5 v5.5.0/go.mod h1:QeEylH57C0v3VO0tkKraVz9oD3Uu93CKPnTLbsidvSw=
 github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
 github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
 github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=

internal/aghnet/addr.go (new file)

@@ -0,0 +1,43 @@
package aghnet
import (
"fmt"
"strings"
"github.com/AdguardTeam/golibs/stringutil"
)
// NormalizeDomain returns a lowercased version of host without the final dot,
// unless host is ".", in which case it returns it unchanged. That is a special
// case that allows matching queries like:
//
// dig IN NS '.'
func NormalizeDomain(host string) (norm string) {
if host == "." {
return host
}
return strings.ToLower(strings.TrimSuffix(host, "."))
}
// NewDomainNameSet returns nil and error, if list has duplicate or empty domain
// name. Otherwise returns a set, which contains domain names normalized using
// [NormalizeDomain].
func NewDomainNameSet(list []string) (set *stringutil.Set, err error) {
set = stringutil.NewSet()
for i, host := range list {
if host == "" {
return nil, fmt.Errorf("at index %d: hostname is empty", i)
}
host = NormalizeDomain(host)
if set.Has(host) {
return nil, fmt.Errorf("duplicate hostname %q at index %d", host, i)
}
set.Add(host)
}
return set, nil
}
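The new helpers can be exercised in isolation.  Below is a standalone sketch, not AdGuard Home's actual code: it mirrors `NormalizeDomain` and `NewDomainNameSet` using only the standard library, with a plain map standing in for golibs' `stringutil.Set`.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeDomain mirrors aghnet.NormalizeDomain: lowercase the name and strip
// the final dot, keeping "." itself intact so root-domain queries match.
func normalizeDomain(host string) (norm string) {
	if host == "." {
		return host
	}

	return strings.ToLower(strings.TrimSuffix(host, "."))
}

// newDomainNameSet mirrors aghnet.NewDomainNameSet, with a map standing in for
// stringutil.Set: empty names are rejected, and names that normalize to the
// same key are reported as duplicates.
func newDomainNameSet(list []string) (set map[string]struct{}, err error) {
	set = make(map[string]struct{}, len(list))
	for i, host := range list {
		if host == "" {
			return nil, fmt.Errorf("at index %d: hostname is empty", i)
		}

		host = normalizeDomain(host)
		if _, ok := set[host]; ok {
			return nil, fmt.Errorf("duplicate hostname %q at index %d", host, i)
		}

		set[host] = struct{}{}
	}

	return set, nil
}

func main() {
	// "Domain.Example." and "domain.example" normalize to the same key.
	_, err := newDomainNameSet([]string{"Domain.Example.", "domain.example"})
	fmt.Println(err)
}
```

This is why the test case `dups` below expects the error `duplicate hostname "domain.example" at index 1`: normalization happens before the duplicate check.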


@@ -0,0 +1,59 @@
package aghnet_test
import (
"testing"
"github.com/AdguardTeam/AdGuardHome/internal/aghnet"
"github.com/AdguardTeam/golibs/testutil"
"github.com/stretchr/testify/assert"
)
func TestNewDomainNameSet(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
wantErrMsg string
in []string
}{{
name: "nil",
wantErrMsg: "",
in: nil,
}, {
name: "success",
wantErrMsg: "",
in: []string{
"Domain.Example",
".",
},
}, {
name: "dups",
wantErrMsg: `duplicate hostname "domain.example" at index 1`,
in: []string{
"Domain.Example",
"domain.example",
},
}, {
name: "bad_domain",
wantErrMsg: "at index 0: hostname is empty",
in: []string{
"",
},
}}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
set, err := aghnet.NewDomainNameSet(tc.in)
testutil.AssertErrorMsg(t, tc.wantErrMsg, err)
if err != nil {
return
}
for _, host := range tc.in {
assert.Truef(t, set.Has(aghnet.NormalizeDomain(host)), "%q not matched", host)
}
})
}
}


@@ -1,12 +1,8 @@
 package aghnet

 import (
-	"fmt"
 	"net/netip"
 	"strings"
-
-	"github.com/AdguardTeam/golibs/errors"
-	"github.com/AdguardTeam/golibs/stringutil"
 )

 // GenerateHostname generates the hostname from ip.  In case of using IPv4 the
@@ -29,32 +25,8 @@ func GenerateHostname(ip netip.Addr) (hostname string) {
 	hostname = ip.StringExpanded()
 	if ip.Is4() {
-		return strings.Replace(hostname, ".", "-", -1)
+		return strings.ReplaceAll(hostname, ".", "-")
 	}

-	return strings.Replace(hostname, ":", "-", -1)
-}
-
-// NewDomainNameSet returns nil and error, if list has duplicate or empty
-// domain name.  Otherwise returns a set, which contains non-FQDN domain names,
-// and nil error.
-func NewDomainNameSet(list []string) (set *stringutil.Set, err error) {
-	set = stringutil.NewSet()
-	for i, v := range list {
-		host := strings.ToLower(strings.TrimSuffix(v, "."))
-
-		// TODO(a.garipov): Think about ignoring empty (".") names in the
-		// future.
-		if host == "" {
-			return nil, errors.Error("host name is empty")
-		}
-
-		if set.Has(host) {
-			return nil, fmt.Errorf("duplicate host name %q at index %d", host, i)
-		}
-
-		set.Add(host)
-	}
-
-	return set, nil
+	return strings.ReplaceAll(hostname, ":", "-")
 }


@@ -1,10 +1,12 @@
 package aghtest

 import (
+	"context"
+	"io"
 	"io/fs"
-	"net"

 	"github.com/AdguardTeam/AdGuardHome/internal/aghos"
+	"github.com/AdguardTeam/AdGuardHome/internal/next/agh"
 	"github.com/AdguardTeam/dnsproxy/upstream"
 	"github.com/miekg/dns"
 )
@@ -17,23 +19,23 @@ import (
 // Package fs

-// type check
-var _ fs.FS = &FS{}
-
-// FS is a mock [fs.FS] implementation for tests.
+// FS is a fake [fs.FS] implementation for tests.
 type FS struct {
 	OnOpen func(name string) (fs.File, error)
 }

+// type check
+var _ fs.FS = (*FS)(nil)
+
 // Open implements the [fs.FS] interface for *FS.
 func (fsys *FS) Open(name string) (fs.File, error) {
 	return fsys.OnOpen(name)
 }

 // type check
-var _ fs.GlobFS = &GlobFS{}
+var _ fs.GlobFS = (*GlobFS)(nil)

-// GlobFS is a mock [fs.GlobFS] implementation for tests.
+// GlobFS is a fake [fs.GlobFS] implementation for tests.
 type GlobFS struct {
 	// FS is embedded here to avoid implementing all it's methods.
 	FS
@@ -46,9 +48,9 @@ func (fsys *GlobFS) Glob(pattern string) ([]string, error) {
 }

 // type check
-var _ fs.StatFS = &StatFS{}
+var _ fs.StatFS = (*StatFS)(nil)

-// StatFS is a mock [fs.StatFS] implementation for tests.
+// StatFS is a fake [fs.StatFS] implementation for tests.
 type StatFS struct {
 	// FS is embedded here to avoid implementing all it's methods.
 	FS
@@ -60,47 +62,34 @@ func (fsys *StatFS) Stat(name string) (fs.FileInfo, error) {
 	return fsys.OnStat(name)
 }

-// Package net
+// Package io

-// type check
-var _ net.Listener = (*Listener)(nil)
-
-// Listener is a mock [net.Listener] implementation for tests.
-type Listener struct {
-	OnAccept func() (conn net.Conn, err error)
-	OnAddr   func() (addr net.Addr)
-	OnClose  func() (err error)
+// Writer is a fake [io.Writer] implementation for tests.
+type Writer struct {
+	OnWrite func(b []byte) (n int, err error)
 }

-// Accept implements the [net.Listener] interface for *Listener.
-func (l *Listener) Accept() (conn net.Conn, err error) {
-	return l.OnAccept()
-}
+// type check
+var _ io.Writer = (*Writer)(nil)

-// Addr implements the [net.Listener] interface for *Listener.
-func (l *Listener) Addr() (addr net.Addr) {
-	return l.OnAddr()
-}
-
-// Close implements the [net.Listener] interface for *Listener.
-func (l *Listener) Close() (err error) {
-	return l.OnClose()
+// Write implements the [io.Writer] interface for *Writer.
+func (w *Writer) Write(b []byte) (n int, err error) {
+	return w.OnWrite(b)
 }

 // Module adguard-home

 // Package aghos

-// type check
-var _ aghos.FSWatcher = (*FSWatcher)(nil)
-
-// FSWatcher is a mock [aghos.FSWatcher] implementation for tests.
+// FSWatcher is a fake [aghos.FSWatcher] implementation for tests.
 type FSWatcher struct {
 	OnEvents func() (e <-chan struct{})
 	OnAdd    func(name string) (err error)
 	OnClose  func() (err error)
 }

+// type check
+var _ aghos.FSWatcher = (*FSWatcher)(nil)
+
 // Events implements the [aghos.FSWatcher] interface for *FSWatcher.
 func (w *FSWatcher) Events() (e <-chan struct{}) {
 	return w.OnEvents()
@@ -116,14 +105,41 @@ func (w *FSWatcher) Close() (err error) {
 	return w.OnClose()
 }

+// Package agh
+
+// ServiceWithConfig is a fake [agh.ServiceWithConfig] implementation for tests.
+type ServiceWithConfig[ConfigType any] struct {
+	OnStart    func() (err error)
+	OnShutdown func(ctx context.Context) (err error)
+	OnConfig   func() (c ConfigType)
+}
+
+// type check
+var _ agh.ServiceWithConfig[struct{}] = (*ServiceWithConfig[struct{}])(nil)
+
+// Start implements the [agh.ServiceWithConfig] interface for
+// *ServiceWithConfig.
+func (s *ServiceWithConfig[_]) Start() (err error) {
+	return s.OnStart()
+}
+
+// Shutdown implements the [agh.ServiceWithConfig] interface for
+// *ServiceWithConfig.
+func (s *ServiceWithConfig[_]) Shutdown(ctx context.Context) (err error) {
+	return s.OnShutdown(ctx)
+}
+
+// Config implements the [agh.ServiceWithConfig] interface for
+// *ServiceWithConfig.
+func (s *ServiceWithConfig[ConfigType]) Config() (c ConfigType) {
+	return s.OnConfig()
+}
+
 // Module dnsproxy

 // Package upstream

-// type check
-var _ upstream.Upstream = (*UpstreamMock)(nil)
-
-// UpstreamMock is a mock [upstream.Upstream] implementation for tests.
+// UpstreamMock is a fake [upstream.Upstream] implementation for tests.
 //
 // TODO(a.garipov): Replace with all uses of Upstream with UpstreamMock and
 // rename it to just Upstream.
@@ -133,6 +149,9 @@ type UpstreamMock struct {
 	OnClose   func() (err error)
 }

+// type check
+var _ upstream.Upstream = (*UpstreamMock)(nil)
+
 // Address implements the [upstream.Upstream] interface for *UpstreamMock.
 func (u *UpstreamMock) Address() (addr string) {
 	return u.OnAddress()


@@ -17,6 +17,7 @@ import (
 	"github.com/AdguardTeam/AdGuardHome/internal/dhcpd"
 	"github.com/AdguardTeam/AdGuardHome/internal/filtering"
 	"github.com/AdguardTeam/AdGuardHome/internal/querylog"
+	"github.com/AdguardTeam/AdGuardHome/internal/rdns"
 	"github.com/AdguardTeam/AdGuardHome/internal/stats"
 	"github.com/AdguardTeam/dnsproxy/proxy"
 	"github.com/AdguardTeam/dnsproxy/upstream"
@@ -277,17 +278,6 @@ func (s *Server) Resolve(host string) ([]net.IPAddr, error) {
 	return s.internalProxy.LookupIPAddr(host)
 }

-// RDNSExchanger is a resolver for clients' addresses.
-type RDNSExchanger interface {
-	// Exchange tries to resolve the ip in a suitable way, i.e. either as local
-	// or as external.
-	Exchange(ip net.IP) (host string, err error)
-
-	// ResolvesPrivatePTR returns true if the RDNSExchanger is able to
-	// resolve PTR requests for locally-served addresses.
-	ResolvesPrivatePTR() (ok bool)
-}
-
 const (
 	// ErrRDNSNoData is returned by [RDNSExchanger.Exchange] when the answer
 	// section of response is either NODATA or has no PTR records.
@@ -299,10 +289,10 @@
 )

 // type check
-var _ RDNSExchanger = (*Server)(nil)
+var _ rdns.Exchanger = (*Server)(nil)

-// Exchange implements the RDNSExchanger interface for *Server.
-func (s *Server) Exchange(ip net.IP) (host string, err error) {
+// Exchange implements the [rdns.Exchanger] interface for *Server.
+func (s *Server) Exchange(ip netip.Addr) (host string, err error) {
 	s.serverLock.RLock()
 	defer s.serverLock.RUnlock()
@@ -310,7 +300,7 @@ func (s *Server) Exchange(ip netip.Addr) (host string, err error) {
 		return "", nil
 	}

-	arpa, err := netutil.IPToReversedAddr(ip)
+	arpa, err := netutil.IPToReversedAddr(ip.AsSlice())
 	if err != nil {
 		return "", fmt.Errorf("reversing ip: %w", err)
 	}
@@ -335,7 +325,7 @@ func (s *Server) Exchange(ip netip.Addr) (host string, err error) {
 	}

 	var resolver *proxy.Proxy
-	if s.privateNets.Contains(ip) {
+	if s.isPrivateIP(ip) {
 		if !s.conf.UsePrivateRDNS {
 			return "", nil
 		}
@@ -350,8 +340,12 @@ func (s *Server) Exchange(ip netip.Addr) (host string, err error) {
 		return "", err
 	}

+	return hostFromPTR(ctx.Res)
+}
+
+// hostFromPTR returns domain name from the PTR response or error.
+func hostFromPTR(resp *dns.Msg) (host string, err error) {
 	// Distinguish between NODATA response and a failed request.
-	resp := ctx.Res
 	if resp.Rcode != dns.RcodeSuccess && resp.Rcode != dns.RcodeNameError {
 		return "", fmt.Errorf(
 			"received %s response: %w",
@@ -370,12 +364,25 @@ func hostFromPTR(resp *dns.Msg) (host string, err error) {
 	return "", ErrRDNSNoData
 }

-// ResolvesPrivatePTR implements the RDNSExchanger interface for *Server.
-func (s *Server) ResolvesPrivatePTR() (ok bool) {
+// isPrivateIP returns true if the ip is private.
+func (s *Server) isPrivateIP(ip netip.Addr) (ok bool) {
+	return s.privateNets.Contains(ip.AsSlice())
+}
+
+// ShouldResolveClient returns false if ip is a loopback address, or ip is
+// private and resolving of private addresses is disabled.
+func (s *Server) ShouldResolveClient(ip netip.Addr) (ok bool) {
+	if ip.IsLoopback() {
+		return false
+	}
+
+	isPrivate := s.isPrivateIP(ip)
+
 	s.serverLock.RLock()
 	defer s.serverLock.RUnlock()

-	return s.conf.UsePrivateRDNS
+	return s.conf.ResolveClients &&
+		(s.conf.UsePrivateRDNS || !isPrivate)
 }

 // Start starts the DNS server.


@@ -1273,11 +1273,11 @@ func TestServer_Exchange(t *testing.T) {
 	)

 	var (
-		onesIP  = net.IP{1, 1, 1, 1}
-		localIP = net.IP{192, 168, 1, 1}
+		onesIP  = netip.MustParseAddr("1.1.1.1")
+		localIP = netip.MustParseAddr("192.168.1.1")
 	)

-	revExtIPv4, err := netutil.IPToReversedAddr(onesIP)
+	revExtIPv4, err := netutil.IPToReversedAddr(onesIP.AsSlice())
 	require.NoError(t, err)

 	extUpstream := &aghtest.UpstreamMock{
@@ -1290,7 +1290,7 @@ func TestServer_Exchange(t *testing.T) {
 		},
 	}

-	revLocIPv4, err := netutil.IPToReversedAddr(localIP)
+	revLocIPv4, err := netutil.IPToReversedAddr(localIP.AsSlice())
 	require.NoError(t, err)

 	locUpstream := &aghtest.UpstreamMock{
@@ -1330,7 +1330,7 @@ func TestServer_Exchange(t *testing.T) {
 		want        string
 		wantErr     error
 		locUpstream upstream.Upstream
-		req         net.IP
+		req         netip.Addr
 	}{{
 		name: "external_good",
 		want: onesHost,
@@ -1354,7 +1354,7 @@ func TestServer_Exchange(t *testing.T) {
 		want:        "",
 		wantErr:     ErrRDNSNoData,
 		locUpstream: locUpstream,
-		req:         net.IP{192, 168, 1, 2},
+		req:         netip.MustParseAddr("192.168.1.2"),
 	}, {
 		name: "invalid_answer",
 		want: "",
@@ -1396,3 +1396,57 @@ func TestServer_Exchange(t *testing.T) {
 		assert.Empty(t, host)
 	})
 }
func TestServer_ShouldResolveClient(t *testing.T) {
srv := &Server{
privateNets: netutil.SubnetSetFunc(netutil.IsLocallyServed),
}
testCases := []struct {
ip netip.Addr
want require.BoolAssertionFunc
name string
resolve bool
usePrivate bool
}{{
name: "default",
ip: netip.MustParseAddr("1.1.1.1"),
want: require.True,
resolve: true,
usePrivate: true,
}, {
name: "no_rdns",
ip: netip.MustParseAddr("1.1.1.1"),
want: require.False,
resolve: false,
usePrivate: true,
}, {
name: "loopback",
ip: netip.MustParseAddr("127.0.0.1"),
want: require.False,
resolve: true,
usePrivate: true,
}, {
name: "private_resolve",
ip: netip.MustParseAddr("192.168.0.1"),
want: require.True,
resolve: true,
usePrivate: true,
}, {
name: "private_no_resolve",
ip: netip.MustParseAddr("192.168.0.1"),
want: require.False,
resolve: true,
usePrivate: false,
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
srv.conf.ResolveClients = tc.resolve
srv.conf.UsePrivateRDNS = tc.usePrivate
ok := srv.ShouldResolveClient(tc.ip)
tc.want(t, ok)
})
}
}
View file
@@ -21,6 +21,8 @@ func TestHandleDNSRequest_filterDNSResponse(t *testing.T) {
||cname.specific^$dnstype=~CNAME
||0.0.0.1^$dnstype=~A
||::1^$dnstype=~AAAA
0.0.0.0 duplicate.domain
0.0.0.0 duplicate.domain
`
forwardConf := ServerConfig{
@@ -137,6 +139,17 @@ func TestHandleDNSRequest_filterDNSResponse(t *testing.T) {
},
A: netutil.IPv4Zero(),
}},
}, {
req: createTestMessage("duplicate.domain."),
name: "duplicate_domain",
wantAns: []dns.RR{&dns.A{
Hdr: dns.RR_Header{
Name: "duplicate.domain.",
Rrtype: dns.TypeA,
Class: dns.ClassINET,
},
A: netutil.IPv4Zero(),
}},
}}
for _, tc := range testCases {
View file
@@ -26,11 +26,25 @@ func (s *Server) makeResponse(req *dns.Msg) (resp *dns.Msg) {
return resp
}
// ipsFromRules extracts non-IP addresses from the filtering result rules.
// containsIP returns true if the IP is already in the list.
func containsIP(ips []net.IP, ip net.IP) bool {
for _, a := range ips {
if a.Equal(ip) {
return true
}
}
return false
}
// ipsFromRules extracts unique non-IP addresses from the filtering result
// rules.
func ipsFromRules(resRules []*filtering.ResultRule) (ips []net.IP) {
for _, r := range resRules {
if r.IP != nil {
ips = append(ips, r.IP)
// len(resRules) and len(ips) are small enough for O(n^2) not to
// raise performance questions.
if ip := r.IP; ip != nil && !containsIP(ips, ip) {
ips = append(ips, ip)
}
}

View file

@@ -2,9 +2,9 @@ package dnsforward
import (
"net"
"strings"
"time"
"github.com/AdguardTeam/AdGuardHome/internal/aghnet"
"github.com/AdguardTeam/AdGuardHome/internal/filtering"
"github.com/AdguardTeam/AdGuardHome/internal/querylog"
"github.com/AdguardTeam/AdGuardHome/internal/stats"
@@ -24,7 +24,7 @@ func (s *Server) processQueryLogsAndStats(dctx *dnsContext) (rc resultCode) {
pctx := dctx.proxyCtx
q := pctx.Req.Question[0]
host := strings.ToLower(strings.TrimSuffix(q.Name, "."))
host := aghnet.NormalizeDomain(q.Name)
ip, _ := netutil.IPAndPortFromAddr(pctx.Addr)
ip = slices.Clone(ip)
@@ -139,11 +139,10 @@ func (s *Server) updateStats(
clientIP string,
) {
pctx := ctx.proxyCtx
e := stats.Entry{}
e.Domain = strings.ToLower(pctx.Req.Question[0].Name)
if e.Domain != "." {
// Remove last ".", but save the domain as is for "." queries.
e.Domain = e.Domain[:len(e.Domain)-1]
e := stats.Entry{
Domain: aghnet.NormalizeDomain(pctx.Req.Question[0].Name),
Result: stats.RNotFiltered,
Time: uint32(elapsed / 1000),
}
if clientID := ctx.clientID; clientID != "" {
@@ -152,9 +151,6 @@ func (s *Server) updateStats(
e.Client = clientIP
}
e.Time = uint32(elapsed / 1000)
e.Result = stats.RNotFiltered
switch res.Reason {
case filtering.FilteredSafeBrowsing:
e.Result = stats.RSafeBrowsing
@@ -162,7 +158,8 @@ func (s *Server) updateStats(
e.Result = stats.RParental
case filtering.FilteredSafeSearch:
e.Result = stats.RSafeSearch
case filtering.FilteredBlockList,
case
filtering.FilteredBlockList,
filtering.FilteredInvalid,
filtering.FilteredBlockedService:
e.Result = stats.RFiltered
View file
@@ -1,10 +1,7 @@
package filtering
import (
"bufio"
"bytes"
"fmt"
"hash/crc32"
"io"
"net/http"
"os"
@@ -14,6 +11,7 @@ import (
"time"
"github.com/AdguardTeam/AdGuardHome/internal/aghalg"
"github.com/AdguardTeam/AdGuardHome/internal/filtering/rulelist"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/log"
"github.com/AdguardTeam/golibs/stringutil"
@@ -29,9 +27,9 @@ const filterDir = "filters"
// TODO(e.burkov): Use more deterministic approach.
var nextFilterID = time.Now().Unix()
// FilterYAML respresents a filter list in the configuration file.
// FilterYAML represents a filter list in the configuration file.
//
// TODO(e.burkov): Investigate if the field oredering is important.
// TODO(e.burkov): Investigate if the field ordering is important.
type FilterYAML struct {
Enabled bool
URL string // URL or a file path
@@ -213,7 +211,7 @@ func (d *DNSFilter) loadFilters(array []FilterYAML) {
err := d.load(filter)
if err != nil {
log.Error("Couldn't load filter %d contents due to %s", filter.ID, err)
log.Error("filtering: loading filter %d: %s", filter.ID, err)
}
}
}
@@ -338,7 +336,8 @@ func (d *DNSFilter) refreshFiltersArray(filters *[]FilterYAML, force bool) (int,
updateFlags = append(updateFlags, updated)
if err != nil {
nfail++
log.Printf("Failed to update filter %s: %s\n", uf.URL, err)
log.Info("filtering: updating filter from url %q: %s\n", uf.URL, err)
continue
}
}
@@ -367,7 +366,13 @@ func (d *DNSFilter) refreshFiltersArray(filters *[]FilterYAML, force bool) (int,
continue
}
log.Info("Updated filter #%d. Rules: %d -> %d", f.ID, f.RulesCount, uf.RulesCount)
log.Info(
"filtering: updated filter %d; rule count: %d (was %d)",
f.ID,
uf.RulesCount,
f.RulesCount,
)
f.Name = uf.Name
f.RulesCount = uf.RulesCount
f.checksum = uf.checksum
@@ -397,9 +402,10 @@ func (d *DNSFilter) refreshFiltersArray(filters *[]FilterYAML, force bool) (int,
//
// TODO(a.garipov, e.burkov): What the hell?
func (d *DNSFilter) refreshFiltersIntl(block, allow, force bool) (int, bool) {
log.Debug("filtering: updating...")
updNum := 0
log.Debug("filtering: starting updating")
defer func() { log.Debug("filtering: finished updating, %d updated", updNum) }()
var lists []FilterYAML
var toUpd []bool
isNetErr := false
@@ -437,131 +443,9 @@ func (d *DNSFilter) refreshFiltersIntl(block, allow, force bool) (int, bool) {
}
}
log.Debug("filtering: update finished: %d lists updated", updNum)
return updNum, false
}
// isPrintableText returns true if data is printable UTF-8 text with CR, LF, TAB
// characters.
//
// TODO(e.burkov): Investigate the purpose of this and improve the
// implementation. Perhaps, use something from the unicode package.
func isPrintableText(data string) (ok bool) {
for _, c := range []byte(data) {
if (c >= ' ' && c != 0x7f) || c == '\n' || c == '\r' || c == '\t' {
continue
}
return false
}
return true
}
// scanLinesWithBreak is essentially a [bufio.ScanLines] which keeps trailing
// line breaks.
func scanLinesWithBreak(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i := bytes.IndexByte(data, '\n'); i >= 0 {
return i + 1, data[0 : i+1], nil
}
if atEOF {
return len(data), data, nil
}
// Request more data.
return 0, nil, nil
}
// parseFilter copies filter's content from src to dst and returns the number of
// rules, number of bytes written, checksum, and title of the parsed list. dst
// must not be nil.
func (d *DNSFilter) parseFilter(
src io.Reader,
dst io.Writer,
) (rulesNum, written int, checksum uint32, title string, err error) {
scanner := bufio.NewScanner(src)
scanner.Split(scanLinesWithBreak)
titleFound := false
for n := 0; scanner.Scan(); written += n {
line := scanner.Text()
var isRule bool
var likelyTitle string
isRule, likelyTitle, err = d.parseFilterLine(line, !titleFound, written == 0)
if err != nil {
return 0, written, 0, "", err
}
if isRule {
rulesNum++
} else if likelyTitle != "" {
title, titleFound = likelyTitle, true
}
checksum = crc32.Update(checksum, crc32.IEEETable, []byte(line))
n, err = dst.Write([]byte(line))
if err != nil {
return 0, written, 0, "", fmt.Errorf("writing filter line: %w", err)
}
}
if err = scanner.Err(); err != nil {
return 0, written, 0, "", fmt.Errorf("scanning filter contents: %w", err)
}
return rulesNum, written, checksum, title, nil
}
// parseFilterLine returns true if the passed line is a rule. line is
// considered a rule if it's not a comment and contains no title.
func (d *DNSFilter) parseFilterLine(
line string,
lookForTitle bool,
testHTML bool,
) (isRule bool, title string, err error) {
if !isPrintableText(line) {
return false, "", errors.Error("filter contains non-printable characters")
}
line = strings.TrimSpace(line)
if line == "" || line[0] == '#' {
return false, "", nil
}
if testHTML && isHTML(line) {
return false, "", errors.Error("data is HTML, not plain text")
}
if line[0] == '!' && lookForTitle {
match := d.filterTitleRegexp.FindStringSubmatch(line)
if len(match) > 1 {
title = match[1]
}
return false, title, nil
}
return true, "", nil
}
// isHTML returns true if the line contains HTML tags instead of plain text.
// line should have no leading space symbols.
//
// TODO(ameshkov): It actually gives too much false-positives. Perhaps, just
// check if trimmed string begins with angle bracket.
func isHTML(line string) (ok bool) {
line = strings.ToLower(line)
return strings.HasPrefix(line, "<html") || strings.HasPrefix(line, "<!doctype")
}
// update refreshes filter's content and a/mtimes of it's file.
func (d *DNSFilter) update(filter *FilterYAML) (b bool, err error) {
b, err = d.updateIntl(filter)
@@ -573,7 +457,7 @@ func (d *DNSFilter) update(filter *FilterYAML) (b bool, err error) {
filter.LastUpdated,
)
if chErr != nil {
log.Error("os.Chtimes(): %v", chErr)
log.Error("filtering: os.Chtimes(): %s", chErr)
}
}
@@ -582,14 +466,12 @@ func (d *DNSFilter) update(filter *FilterYAML) (b bool, err error) {
// finalizeUpdate closes and gets rid of temporary file f with filter's content
// according to updated.  It also saves new values of flt's name, rules number
// and checksum if sucсeeded.
// and checksum if succeeded.
func (d *DNSFilter) finalizeUpdate(
file *os.File,
flt *FilterYAML,
updated bool,
name string,
rnum int,
cs uint32,
res *rulelist.ParseResult,
) (err error) {
tmpFileName := file.Name()
@@ -602,23 +484,24 @@ func (d *DNSFilter) finalizeUpdate(
}
if !updated {
log.Tracef("filter #%d from %s has no changes, skip", flt.ID, flt.URL)
log.Debug("filtering: filter %d from url %q has no changes, skipping", flt.ID, flt.URL)
return os.Remove(tmpFileName)
}
fltPath := flt.Path(d.DataDir)
log.Printf("saving contents of filter #%d into %s", flt.ID, fltPath)
log.Info("filtering: saving contents of filter %d into %q", flt.ID, fltPath)
// Don't use renamio or maybe packages, since those will require loading the
// whole filter content to the memory on Windows.
// Don't use renameio or maybe packages, since those will require loading
// the whole filter content to the memory on Windows.
err = os.Rename(tmpFileName, fltPath)
if err != nil {
return errors.WithDeferred(err, os.Remove(tmpFileName))
}
flt.Name, flt.checksum, flt.RulesCount = aghalg.Coalesce(flt.Name, name), cs, rnum
flt.Name = aghalg.Coalesce(flt.Name, res.Title)
flt.checksum, flt.RulesCount = res.Checksum, res.RulesCount
return nil
}
@@ -626,11 +509,9 @@ func (d *DNSFilter) finalizeUpdate(
// updateIntl updates the flt rewriting it's actual file.  It returns true if
// the actual update has been performed.
func (d *DNSFilter) updateIntl(flt *FilterYAML) (ok bool, err error) {
log.Tracef("downloading update for filter %d from %s", flt.ID, flt.URL)
log.Debug("filtering: downloading update for filter %d from %q", flt.ID, flt.URL)
var name string
var rnum, n int
var cs uint32
var res *rulelist.ParseResult
var tmpFile *os.File
tmpFile, err = os.CreateTemp(filepath.Join(d.DataDir, filterDir), "")
@@ -638,9 +519,14 @@ func (d *DNSFilter) updateIntl(flt *FilterYAML) (ok bool, err error) {
return false, err
}
defer func() {
finErr := d.finalizeUpdate(tmpFile, flt, ok, name, rnum, cs)
finErr := d.finalizeUpdate(tmpFile, flt, ok, res)
if ok && finErr == nil {
log.Printf("updated filter %d: %d bytes, %d rules", flt.ID, n, rnum)
log.Info(
"filtering: updated filter %d: %d bytes, %d rules",
flt.ID,
res.BytesWritten,
res.RulesCount,
)
return
}
@@ -661,14 +547,14 @@ func (d *DNSFilter) updateIntl(flt *FilterYAML) (ok bool, err error) {
var resp *http.Response
resp, err = d.HTTPClient.Get(flt.URL)
if err != nil {
log.Printf("requesting filter from %s, skip: %s", flt.URL, err)
log.Info("filtering: requesting filter from %q: %s, skipping", flt.URL, err)
return false, err
}
defer func() { err = errors.WithDeferred(err, resp.Body.Close()) }()
if resp.StatusCode != http.StatusOK {
log.Printf("got status code %d from %s, skip", resp.StatusCode, flt.URL)
log.Info("filtering: got status code %d from %q, skipping", resp.StatusCode, flt.URL)
return false, fmt.Errorf("got status code %d, want %d", resp.StatusCode, http.StatusOK)
}
@@ -685,16 +571,20 @@ func (d *DNSFilter) updateIntl(flt *FilterYAML) (ok bool, err error) {
r = f
}
rnum, n, cs, name, err = d.parseFilter(r, tmpFile)
return cs != flt.checksum && err == nil, err
bufPtr := d.bufPool.Get().(*[]byte)
defer d.bufPool.Put(bufPtr)
p := rulelist.NewParser()
res, err = p.Parse(tmpFile, r, *bufPtr)
return res.Checksum != flt.checksum && err == nil, err
}
// loads filter contents from the file in dataDir
func (d *DNSFilter) load(flt *FilterYAML) (err error) {
fileName := flt.Path(d.DataDir)
log.Debug("filtering: loading filter %d from %s", flt.ID, fileName)
log.Debug("filtering: loading filter %d from %q", flt.ID, fileName)
file, err := os.Open(fileName)
if errors.Is(err, os.ErrNotExist) {
@@ -710,14 +600,18 @@ func (d *DNSFilter) load(flt *FilterYAML) (err error) {
return fmt.Errorf("getting filter file stat: %w", err)
}
log.Debug("filtering: file %s, id %d, length %d", fileName, flt.ID, st.Size())
log.Debug("filtering: file %q, id %d, length %d", fileName, flt.ID, st.Size())
rulesCount, _, checksum, _, err := d.parseFilter(file, io.Discard)
bufPtr := d.bufPool.Get().(*[]byte)
defer d.bufPool.Put(bufPtr)
p := rulelist.NewParser()
res, err := p.Parse(io.Discard, file, *bufPtr)
if err != nil {
return fmt.Errorf("parsing filter file: %w", err)
}
flt.RulesCount, flt.checksum, flt.LastUpdated = rulesCount, checksum, st.ModTime()
flt.RulesCount, flt.checksum, flt.LastUpdated = res.RulesCount, res.Checksum, st.ModTime()
return nil
}
@@ -759,8 +653,9 @@ func (d *DNSFilter) enableFiltersLocked(async bool) {
})
}
if err := d.SetFilters(filters, allowFilters, async); err != nil {
log.Debug("enabling filters: %s", err)
err := d.setFilters(filters, allowFilters, async)
if err != nil {
log.Error("filtering: enabling filters: %s", err)
}
d.SetEnabled(d.FilteringEnabled)
View file
@@ -9,7 +9,6 @@ import (
"net/http"
"os"
"path/filepath"
"regexp"
"runtime"
"runtime/debug"
"strings"
@@ -18,6 +17,7 @@ import (
"github.com/AdguardTeam/AdGuardHome/internal/aghhttp"
"github.com/AdguardTeam/AdGuardHome/internal/aghnet"
"github.com/AdguardTeam/AdGuardHome/internal/filtering/rulelist"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/log"
"github.com/AdguardTeam/golibs/mathutil"
@@ -170,6 +170,15 @@ type Checker interface {
// DNSFilter matches hostnames and DNS requests against filtering rules.
type DNSFilter struct {
// bufPool is a pool of buffers used for filtering-rule list parsing.
bufPool *sync.Pool
rulesStorage *filterlist.RuleStorage
filteringEngine *urlfilter.DNSEngine
rulesStorageAllow *filterlist.RuleStorage
filteringEngineAllow *urlfilter.DNSEngine
safeSearch SafeSearch
// safeBrowsingChecker is the safe browsing hash-prefix checker.
@@ -178,12 +187,6 @@ type DNSFilter struct {
// parentalControl is the parental control hash-prefix checker.
parentalControlChecker Checker
rulesStorage *filterlist.RuleStorage
filteringEngine *urlfilter.DNSEngine
rulesStorageAllow *filterlist.RuleStorage
filteringEngineAllow *urlfilter.DNSEngine
engineLock sync.RWMutex
Config // for direct access by library users, even a = assignment
@@ -196,12 +199,6 @@ type DNSFilter struct {
refreshLock *sync.Mutex
// filterTitleRegexp is the regular expression to retrieve a name of a
// filter list.
//
// TODO(e.burkov): Don't use regexp for such a simple text processing task.
filterTitleRegexp *regexp.Regexp
hostCheckers []hostChecker
}
@@ -339,12 +336,12 @@ func cloneRewrites(entries []*LegacyRewrite) (clone []*LegacyRewrite) {
return clone
}
// SetFilters sets new filters, synchronously or asynchronously.  When filters
// setFilters sets new filters, synchronously or asynchronously.  When filters
// are set asynchronously, the old filters continue working until the new
// filters are ready.
//
// In this case the caller must ensure that the old filter files are intact.
func (d *DNSFilter) SetFilters(blockFilters, allowFilters []Filter, async bool) error {
func (d *DNSFilter) setFilters(blockFilters, allowFilters []Filter, async bool) error {
if async {
params := filtersInitializerParams{
allowFilters: allowFilters,
@@ -370,14 +367,7 @@ func (d *DNSFilter) SetFilters(blockFilters, allowFilters []Filter, async bool)
return nil
}
err := d.initFiltering(allowFilters, blockFilters)
if err != nil {
log.Error("filtering: can't initialize filtering subsystem: %s", err)
return err
}
return nil
return d.initFiltering(allowFilters, blockFilters)
}
// Starts initializing new filters by signal from channel
@@ -386,7 +376,8 @@ func (d *DNSFilter) filtersInitializer() {
params := <-d.filtersInitializerChan
err := d.initFiltering(params.allowFilters, params.blockFilters)
if err != nil {
log.Error("Can't initialize filtering subsystem: %s", err)
log.Error("filtering: initializing: %s", err)
continue
}
}
@@ -718,7 +709,7 @@ func newRuleStorage(filters []Filter) (rs *filterlist.RuleStorage, err error) {
}
// Initialize urlfilter objects.
func (d *DNSFilter) initFiltering(allowFilters, blockFilters []Filter) error {
func (d *DNSFilter) initFiltering(allowFilters, blockFilters []Filter) (err error) {
rulesStorage, err := newRuleStorage(blockFilters)
if err != nil {
return err
@@ -745,7 +736,8 @@ func (d *DNSFilter) initFiltering(allowFilters, blockFilters []Filter) error {
// Make sure that the OS reclaims memory as soon as possible.
debug.FreeOSMemory()
log.Debug("initialized filtering engine")
log.Debug("filtering: initialized filtering engine")
return nil
}
@@ -949,8 +941,14 @@ func InitModule() {
// be non-nil.
func New(c *Config, blockFilters []Filter) (d *DNSFilter, err error) {
d = &DNSFilter{
bufPool: &sync.Pool{
New: func() (buf any) {
bufVal := make([]byte, rulelist.MaxRuleLen)
return &bufVal
},
},
refreshLock: &sync.Mutex{},
filterTitleRegexp: regexp.MustCompile(`^! Title: +(.*)$`),
safeBrowsingChecker: c.SafeBrowsingChecker,
parentalControlChecker: c.ParentalControlChecker,
}
@@ -1047,7 +1045,7 @@ func (d *DNSFilter) checkSafeBrowsing(
if log.GetLevel() >= log.DEBUG {
timer := log.StartTimer()
defer timer.LogElapsed("safebrowsing lookup for %q", host)
defer timer.LogElapsed("filtering: safebrowsing lookup for %q", host)
}
res = Result{
@@ -1079,7 +1077,7 @@ func (d *DNSFilter) checkParental(
if log.GetLevel() >= log.DEBUG {
timer := log.StartTimer()
defer timer.LogElapsed("parental lookup for %q", host)
defer timer.LogElapsed("filtering: parental lookup for %q", host)
}
res = Result{
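The `bufPool` initialized above follows a standard `sync.Pool` pattern: pooling pointers to byte slices rather than the slices themselves, so `Get` and `Put` don't allocate on every call. A minimal sketch, with `maxRuleLen` standing in for `rulelist.MaxRuleLen` (whose actual value is not shown in this diff):

```go
package main

import (
	"fmt"
	"sync"
)

// maxRuleLen stands in for rulelist.MaxRuleLen; the real constant bounds the
// length of a single filtering rule.
const maxRuleLen = 1024

// bufPool hands out reusable parsing buffers.  Storing *[]byte instead of
// []byte avoids the interface-conversion allocation on each Get and Put.
var bufPool = &sync.Pool{
	New: func() any {
		buf := make([]byte, maxRuleLen)
		return &buf
	},
}

func main() {
	bufPtr := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufPtr)

	fmt.Println(len(*bufPtr)) // 1024
}
```

This is why the diff's `load` and `updateIntl` both do `bufPtr := d.bufPool.Get().(*[]byte)` followed by a deferred `Put` before parsing.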
View file
@@ -547,7 +547,7 @@ func TestWhitelist(t *testing.T) {
}}
d, setts := newForTest(t, nil, filters)
err := d.SetFilters(filters, whiteFilters, false)
err := d.setFilters(filters, whiteFilters, false)
require.NoError(t, err)
t.Cleanup(d.Close)
t.Cleanup(d.Close) t.Cleanup(d.Close)
View file
@@ -25,7 +25,7 @@ func toCacheItem(data []byte) *cacheItem {
t := time.Unix(int64(binary.BigEndian.Uint64(data)), 0)
data = data[expirySize:]
hashes := make([]hostnameHash, len(data)/hashSize)
hashes := make([]hostnameHash, 0, len(data)/hashSize)
for i := 0; i < len(data); i += hashSize {
var hash hostnameHash
@@ -41,12 +41,13 @@ func toCacheItem(data []byte) *cacheItem {
// fromCacheItem encodes cacheItem into data.
func fromCacheItem(item *cacheItem) (data []byte) {
data = make([]byte, len(item.hashes)*hashSize+expirySize)
data = make([]byte, 0, len(item.hashes)*hashSize+expirySize)
expiry := item.expiry.Unix()
binary.BigEndian.PutUint64(data[:expirySize], uint64(expiry))
data = binary.BigEndian.AppendUint64(data, uint64(expiry))
for _, v := range item.hashes {
// nolint:looppointer // The subsilce is used for a copy.
// nolint:looppointer // The subslice of v is used for a copy.
data = append(data, v[:]...)
}
@@ -62,7 +63,7 @@ func (c *Checker) findInCache(
i := 0
for _, hash := range hashes {
// nolint:looppointer // The subsilce is used for a safe cache lookup.
// nolint:looppointer // The hash subslice is used for a cache lookup.
data := c.cache.Get(hash[:prefixLen])
if data == nil {
hashes[i] = hash
@@ -97,34 +98,36 @@ func (c *Checker) storeInCache(hashesToRequest, respHashes []hostnameHash) {
for _, hash := range respHashes {
var pref prefix
// nolint:looppointer // The subsilce is used for a copy.
// nolint:looppointer // The hash subslice is used for a copy.
copy(pref[:], hash[:])
hashToStore[pref] = append(hashToStore[pref], hash)
}
for pref, hash := range hashToStore {
// nolint:looppointer // The subsilce is used for a safe cache lookup.
c.setCache(pref[:], hash)
c.setCache(pref, hash)
}
for _, hash := range hashesToRequest {
// nolint:looppointer // The subsilce is used for a safe cache lookup.
pref := hash[:prefixLen]
val := c.cache.Get(pref)
// nolint:looppointer // The hash subslice is used for a cache lookup.
val := c.cache.Get(hash[:prefixLen])
if val == nil {
var pref prefix
// nolint:looppointer // The hash subslice is used for a copy.
copy(pref[:], hash[:])
c.setCache(pref, nil)
}
}
}
// setCache stores hash in cache.
func (c *Checker) setCache(pref []byte, hashes []hostnameHash) {
func (c *Checker) setCache(pref prefix, hashes []hostnameHash) {
item := &cacheItem{
expiry: time.Now().Add(c.cacheTime),
hashes: hashes,
}
c.cache.Set(pref, fromCacheItem(item))
c.cache.Set(pref[:], fromCacheItem(item))
log.Debug("%s: stored in cache: %v", c.svc, pref)
}
View file
@@ -0,0 +1,44 @@
package hashprefix
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestCacheItem(t *testing.T) {
item := &cacheItem{
expiry: time.Unix(0x01_23_45_67_89_AB_CD_EF, 0),
hashes: []hostnameHash{{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
}, {
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
}},
}
wantData := []byte{
0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
0x01, 0x03, 0x05, 0x07, 0x02, 0x04, 0x06, 0x08,
}
gotData := fromCacheItem(item)
assert.Equal(t, wantData, gotData)
newItem := toCacheItem(gotData)
gotData = fromCacheItem(newItem)
assert.Equal(t, wantData, gotData)
}
View file
@@ -173,7 +173,7 @@ func (c *Checker) getQuestion(hashes []hostnameHash) (q string) {
b := &strings.Builder{}
for _, hash := range hashes {
// nolint:looppointer // The subsilce is used for safe hex encoding.
// nolint:looppointer // The hash subslice is used for hex encoding.
stringutil.WriteToBuilder(b, hex.EncodeToString(hash[:prefixLen]), ".")
}
} }
View file
@@ -95,7 +95,7 @@ func (d *DNSFilter) handleFilteringAddURL(w http.ResponseWriter, r *http.Request
r,
w,
http.StatusBadRequest,
"Couldn't fetch filter from url %s: %s",
"Couldn't fetch filter from URL %q: %s",
filt.URL,
err,
)
) )
View file
@@ -122,7 +122,7 @@ func matchDomainWildcard(host, wildcard string) (ok bool) {
return isWildcard(wildcard) && strings.HasSuffix(host, wildcard[1:])
}
// legacyRewriteSortsBefore sorts rewirtes according to the following priority:
// legacyRewriteSortsBefore sorts rewrites according to the following priority:
//
// 1. A and AAAA > CNAME;
// 2. wildcard > exact;

View file

@ -0,0 +1,9 @@
package rulelist
import "github.com/AdguardTeam/golibs/errors"
// ErrHTML is returned by [Parser.Parse] if the data is likely to be HTML.
//
// TODO(a.garipov): This error is currently returned to the UI. Stop that and
// make it all-lowercase.
const ErrHTML errors.Error = "data is HTML, not plain text"

View file

@ -0,0 +1,184 @@
package rulelist
import (
"bufio"
"bytes"
"fmt"
"hash/crc32"
"io"
"unicode"
"github.com/AdguardTeam/golibs/errors"
)
// Parser is a filtering-rule parser that collects data, such as the checksum
// and the title, as well as counts rules and removes comments.
type Parser struct {
title string
rulesCount int
written int
checksum uint32
titleFound bool
}
// NewParser returns a new filtering-rule parser.
func NewParser() (p *Parser) {
return &Parser{}
}
// ParseResult contains information about the results of parsing a
// filtering-rule list by [Parser.Parse].
type ParseResult struct {
// Title is the title contained within the filtering-rule list, if any.
Title string
// RulesCount is the number of rules in the list. It excludes empty lines
// and comments.
RulesCount int
// BytesWritten is the number of bytes written to dst.
BytesWritten int
// Checksum is the CRC-32 checksum of the rules content. That is, excluding
// empty lines and comments.
Checksum uint32
}
// Parse parses data from src into dst, using buf as the scanning buffer.  r
// is never nil.
func (p *Parser) Parse(dst io.Writer, src io.Reader, buf []byte) (r *ParseResult, err error) {
s := bufio.NewScanner(src)
s.Buffer(buf, MaxRuleLen)
lineIdx := 0
for s.Scan() {
var n int
n, err = p.processLine(dst, s.Bytes(), lineIdx)
p.written += n
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return p.result(), err
}
lineIdx++
}
r = p.result()
err = s.Err()
return r, errors.Annotate(err, "scanning filter contents: %w")
}
// result returns the current parsing result.
func (p *Parser) result() (r *ParseResult) {
return &ParseResult{
Title: p.title,
RulesCount: p.rulesCount,
BytesWritten: p.written,
Checksum: p.checksum,
}
}
// processLine processes a single line. It may write to dst, and if it does, n
// is the number of bytes written.
func (p *Parser) processLine(dst io.Writer, line []byte, lineIdx int) (n int, err error) {
trimmed := bytes.TrimSpace(line)
if p.written == 0 && isHTMLLine(trimmed) {
return 0, ErrHTML
}
badIdx, isRule := 0, false
if p.titleFound {
badIdx, isRule = parseLine(trimmed)
} else {
badIdx, isRule = p.parseLineTitle(trimmed)
}
if badIdx != -1 {
return 0, fmt.Errorf(
"line at index %d: character at index %d: non-printable character",
lineIdx,
badIdx+bytes.Index(line, trimmed),
)
}
if !isRule {
return 0, nil
}
p.rulesCount++
p.checksum = crc32.Update(p.checksum, crc32.IEEETable, trimmed)
// Assume that there is generally enough space in the buffer to add a
// newline.
n, err = dst.Write(append(trimmed, '\n'))
return n, errors.Annotate(err, "writing rule line: %w")
}
// isHTMLLine returns true if line is likely an HTML line. line is assumed to
// be trimmed of whitespace characters.
func isHTMLLine(line []byte) (isHTML bool) {
return hasPrefixFold(line, []byte("<html")) || hasPrefixFold(line, []byte("<!doctype"))
}
// hasPrefixFold is a simple, best-effort prefix matcher. It may return
// incorrect results for some non-ASCII characters.
func hasPrefixFold(b, prefix []byte) (ok bool) {
l := len(prefix)
return len(b) >= l && bytes.EqualFold(b[:l], prefix)
}
// parseLine returns true if the parsed line is a filtering rule. line is
// assumed to be trimmed of whitespace characters. nonPrintIdx is the index of
// the first non-printable character, if any; if there are none, nonPrintIdx is
// -1.
//
// A line is considered a rule if it's not empty, not a comment, and contains
// only printable characters.
func parseLine(line []byte) (nonPrintIdx int, isRule bool) {
if len(line) == 0 || line[0] == '#' || line[0] == '!' {
return -1, false
}
nonPrintIdx = bytes.IndexFunc(line, isNotPrintable)
return nonPrintIdx, nonPrintIdx == -1
}
// isNotPrintable returns true if r is not a printable character that can be
// contained in a filtering rule.
func isNotPrintable(r rune) (ok bool) {
// Tab isn't included into Unicode's graphic symbols, so include it here
// explicitly.
return r != '\t' && !unicode.IsGraphic(r)
}
// parseLineTitle is like [parseLine] but additionally looks for a title. line
// is assumed to be trimmed of whitespace characters.
func (p *Parser) parseLineTitle(line []byte) (nonPrintIdx int, isRule bool) {
if len(line) == 0 || line[0] == '#' {
return -1, false
}
if line[0] != '!' {
nonPrintIdx = bytes.IndexFunc(line, isNotPrintable)
return nonPrintIdx, nonPrintIdx == -1
}
const titlePattern = "! Title: "
if !bytes.HasPrefix(line, []byte(titlePattern)) {
return -1, false
}
title := bytes.TrimSpace(line[len(titlePattern):])
if title != nil {
// Note that title can be a non-nil empty slice. Consider that normal
// and just stop looking for other titles.
p.title = string(title)
p.titleFound = true
}
return -1, false
}

View file

@ -0,0 +1,247 @@
package rulelist_test
import (
"bufio"
"bytes"
"strings"
"testing"
"github.com/AdguardTeam/AdGuardHome/internal/aghtest"
"github.com/AdguardTeam/AdGuardHome/internal/filtering/rulelist"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestParser_Parse(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
in string
wantDst string
wantErrMsg string
wantTitle string
wantRulesNum int
wantWritten int
}{{
name: "empty",
in: "",
wantDst: "",
wantErrMsg: "",
wantTitle: "",
wantRulesNum: 0,
wantWritten: 0,
}, {
name: "html",
in: testRuleTextHTML,
wantErrMsg: rulelist.ErrHTML.Error(),
wantTitle: "",
wantRulesNum: 0,
wantWritten: 0,
}, {
name: "comments",
in: "# Comment 1\n" +
"! Comment 2\n",
wantErrMsg: "",
wantTitle: "",
wantRulesNum: 0,
wantWritten: 0,
}, {
name: "rule",
in: testRuleTextBlocked,
wantDst: testRuleTextBlocked,
wantErrMsg: "",
wantRulesNum: 1,
wantTitle: "",
wantWritten: len(testRuleTextBlocked),
}, {
name: "html_in_rule",
in: testRuleTextBlocked + testRuleTextHTML,
wantDst: testRuleTextBlocked + testRuleTextHTML,
wantErrMsg: "",
wantTitle: "",
wantRulesNum: 2,
wantWritten: len(testRuleTextBlocked) + len(testRuleTextHTML),
}, {
name: "title",
in: "! Title: Test Title \n" +
"! Title: Bad, Ignored Title\n" +
testRuleTextBlocked,
wantDst: testRuleTextBlocked,
wantErrMsg: "",
wantTitle: "Test Title",
wantRulesNum: 1,
wantWritten: len(testRuleTextBlocked),
}, {
name: "bad_char",
in: "! Title: Test Title \n" +
testRuleTextBlocked +
">>>\x7F<<<",
wantDst: testRuleTextBlocked,
wantErrMsg: "line at index 2: " +
"character at index 3: " +
"non-printable character",
wantTitle: "Test Title",
wantRulesNum: 1,
wantWritten: len(testRuleTextBlocked),
}, {
name: "too_long",
in: strings.Repeat("a", rulelist.MaxRuleLen+1),
wantDst: "",
wantErrMsg: "scanning filter contents: " + bufio.ErrTooLong.Error(),
wantTitle: "",
wantRulesNum: 0,
wantWritten: 0,
}, {
name: "bad_tab_and_comment",
in: testRuleTextBadTab,
wantDst: testRuleTextBadTab,
wantErrMsg: "",
wantTitle: "",
wantRulesNum: 1,
wantWritten: len(testRuleTextBadTab),
}, {
name: "etc_hosts_tab_and_comment",
in: testRuleTextEtcHostsTab,
wantDst: testRuleTextEtcHostsTab,
wantErrMsg: "",
wantTitle: "",
wantRulesNum: 1,
wantWritten: len(testRuleTextEtcHostsTab),
}}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
dst := &bytes.Buffer{}
buf := make([]byte, rulelist.MaxRuleLen)
p := rulelist.NewParser()
r, err := p.Parse(dst, strings.NewReader(tc.in), buf)
require.NotNil(t, r)
testutil.AssertErrorMsg(t, tc.wantErrMsg, err)
assert.Equal(t, tc.wantDst, dst.String())
assert.Equal(t, tc.wantTitle, r.Title)
assert.Equal(t, tc.wantRulesNum, r.RulesCount)
assert.Equal(t, tc.wantWritten, r.BytesWritten)
if tc.wantWritten > 0 {
assert.NotZero(t, r.Checksum)
}
})
}
}
func TestParser_Parse_writeError(t *testing.T) {
t.Parallel()
dst := &aghtest.Writer{
OnWrite: func(b []byte) (n int, err error) {
return 1, errors.Error("test error")
},
}
buf := make([]byte, rulelist.MaxRuleLen)
p := rulelist.NewParser()
r, err := p.Parse(dst, strings.NewReader(testRuleTextBlocked), buf)
require.NotNil(t, r)
testutil.AssertErrorMsg(t, "writing rule line: test error", err)
assert.Equal(t, 1, r.BytesWritten)
}
func TestParser_Parse_checksums(t *testing.T) {
t.Parallel()
const (
withoutComments = testRuleTextBlocked
withComments = "! Some comment.\n" +
" " + testRuleTextBlocked +
"# Another comment.\n"
)
buf := make([]byte, rulelist.MaxRuleLen)
p := rulelist.NewParser()
r, err := p.Parse(&bytes.Buffer{}, strings.NewReader(withoutComments), buf)
require.NotNil(t, r)
require.NoError(t, err)
gotWithoutComments := r.Checksum
p = rulelist.NewParser()
r, err = p.Parse(&bytes.Buffer{}, strings.NewReader(withComments), buf)
require.NotNil(t, r)
require.NoError(t, err)
gotWithComments := r.Checksum
assert.Equal(t, gotWithoutComments, gotWithComments)
}
var (
resSink *rulelist.ParseResult
errSink error
)
func BenchmarkParser_Parse(b *testing.B) {
dst := &bytes.Buffer{}
src := strings.NewReader(strings.Repeat(testRuleTextBlocked, 1000))
buf := make([]byte, rulelist.MaxRuleLen)
p := rulelist.NewParser()
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
resSink, errSink = p.Parse(dst, src, buf)
dst.Reset()
}
require.NoError(b, errSink)
require.NotNil(b, resSink)
}
func FuzzParser_Parse(f *testing.F) {
const n = 64
testCases := []string{
"",
"# Comment",
"! Comment",
"! Title ",
"! Title XXX",
testRuleTextEtcHostsTab,
testRuleTextHTML,
testRuleTextBlocked,
testRuleTextBadTab,
"1.2.3.4",
"1.2.3.4 etc-hosts.example",
">>>\x00<<<",
">>>\x7F<<<",
strings.Repeat("a", n+1),
}
for _, tc := range testCases {
f.Add(tc)
}
buf := make([]byte, n)
f.Fuzz(func(t *testing.T, input string) {
require.Eventually(t, func() (ok bool) {
dst := &bytes.Buffer{}
src := strings.NewReader(input)
p := rulelist.NewParser()
r, _ := p.Parse(dst, src, buf)
require.NotNil(t, r)
return true
}, testTimeout, testTimeout/100)
})
}

View file

@ -0,0 +1,11 @@
// Package rulelist contains the implementation of the standard rule-list
// filter that wraps an urlfilter filtering-engine.
//
// TODO(a.garipov): Expand.
package rulelist
// MaxRuleLen is the maximum length of a line with a filtering rule, in bytes.
//
// TODO(a.garipov): Consider changing this to a rune length, like AdGuardDNS
// does.
const MaxRuleLen = 1024

View file

@ -0,0 +1,14 @@
package rulelist_test
import "time"
// testTimeout is the common timeout for tests.
const testTimeout = 1 * time.Second
// Common texts for tests.
const (
testRuleTextHTML = "<!DOCTYPE html>\n"
testRuleTextBlocked = "||blocked.example^\n"
testRuleTextBadTab = "||bad-tab-and-comment.example^\t# A comment.\n"
testRuleTextEtcHostsTab = "0.0.0.0 tab..example^\t# A comment.\n"
)

View file

@ -1505,6 +1505,7 @@ var blockedServices = []blockedService{{
	"||aus.social^",
	"||awscommunity.social^",
	"||climatejustice.social^",
	"||cupoftea.social^",
	"||cyberplace.social^",
	"||defcon.social^",
	"||det.social^",
@ -1530,6 +1531,7 @@ var blockedServices = []blockedService{{
	"||masto.pt^",
	"||mastodon.au^",
	"||mastodon.bida.im^",
	"||mastodon.com.tr^",
	"||mastodon.eus^",
	"||mastodon.green^",
	"||mastodon.ie^",
@ -1551,11 +1553,11 @@ var blockedServices = []blockedService{{
	"||mastodont.cat^",
	"||mastodontech.de^",
	"||mastodontti.fi^",
	"||mastouille.fr^",
	"||mathstodon.xyz^",
	"||metalhead.club^",
	"||mindly.social^",
	"||mstdn.ca^",
	"||mstdn.jp^",
	"||mstdn.party^",
	"||mstdn.plus^",
	"||mstdn.social^",
@ -1567,7 +1569,6 @@ var blockedServices = []blockedService{{
	"||nrw.social^",
	"||o3o.ca^",
	"||ohai.social^",
	"||pewtix.com^",
	"||piaille.fr^",
	"||pol.social^",
	"||ravenation.club^",
@ -1582,20 +1583,19 @@ var blockedServices = []blockedService{{
	"||social.linux.pizza^",
	"||social.politicaconciencia.org^",
	"||social.vivaldi.net^",
	"||sself.co^",
	"||stranger.social^",
	"||sueden.social^",
	"||tech.lgbt^",
	"||techhub.social^",
	"||theblower.au^",
	"||tkz.one^",
	"||todon.eu^",
	"||toot.aquilenet.fr^",
	"||toot.community^",
	"||toot.funami.tech^",
	"||toot.io^",
	"||toot.wales^",
	"||troet.cafe^",
	"||twingyeo.kr^",
	"||union.place^",
	"||universeodon.com^",
	"||urbanists.social^",

View file

@ -30,32 +30,30 @@ import (
const dataDir = "data"

// logSettings are the logging settings part of the configuration file.
-//
-// TODO(a.garipov): Put them into a separate object.
type logSettings struct {
	// File is the path to the log file.  If empty, logs are written to stdout.
	// If "syslog", logs are written to syslog.
-	File string `yaml:"log_file"`
+	File string `yaml:"file"`

	// MaxBackups is the maximum number of old log files to retain.
	//
	// NOTE: MaxAge may still cause them to get deleted.
-	MaxBackups int `yaml:"log_max_backups"`
+	MaxBackups int `yaml:"max_backups"`

	// MaxSize is the maximum size of the log file before it gets rotated, in
	// megabytes.  The default value is 100 MB.
-	MaxSize int `yaml:"log_max_size"`
+	MaxSize int `yaml:"max_size"`

	// MaxAge is the maximum duration for retaining old log files, in days.
-	MaxAge int `yaml:"log_max_age"`
+	MaxAge int `yaml:"max_age"`

	// Compress determines, if the rotated log files should be compressed using
	// gzip.
-	Compress bool `yaml:"log_compress"`
+	Compress bool `yaml:"compress"`

	// LocalTime determines, if the time used for formatting the timestamps in
	// is the computer's local time.
-	LocalTime bool `yaml:"log_localtime"`
+	LocalTime bool `yaml:"local_time"`

	// Verbose determines, if verbose (aka debug) logging is enabled.
	Verbose bool `yaml:"verbose"`
@ -142,7 +140,8 @@ type configuration struct {
	// Keep this field sorted to ensure consistent ordering.
	Clients *clientsConfig `yaml:"clients"`
-	logSettings `yaml:",inline"`
+	// Log is a block with log configuration settings.
+	Log logSettings `yaml:"log"`
	OSConfig *osConfig `yaml:"os"`
@ -241,6 +240,7 @@ type tlsConfigSettings struct {
type queryLogConfig struct {
	// Ignored is the list of host names, which should not be written to log.
+	// "." is considered to be the root domain.
	Ignored []string `yaml:"ignored"`

	// Interval is the interval for query log's files rotation.
@ -390,7 +390,7 @@ var config = &configuration{
			HostsFile: true,
		},
	},
-	logSettings: logSettings{
+	Log: logSettings{
		Compress:   false,
		LocalTime:  false,
		MaxBackups: 0,
@ -421,19 +421,19 @@ func (c *configuration) getConfigFilename() string {
// separate method in order to configure logger before the actual configuration
// is parsed and applied.
func readLogSettings() (ls *logSettings) {
-	ls = &logSettings{}
+	conf := &configuration{}

	yamlFile, err := readConfigFile()
	if err != nil {
-		return ls
+		return &logSettings{}
	}

-	err = yaml.Unmarshal(yamlFile, ls)
+	err = yaml.Unmarshal(yamlFile, conf)
	if err != nil {
		log.Error("Couldn't get logging settings from the configuration: %s", err)
	}

-	return ls
+	return &conf.Log
}

// validateBindHosts returns error if any of binding hosts from configuration is
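Based on the renamed yaml tags above, logging settings that used to live at the top level as `log_file`, `log_max_backups`, and so on are now nested under a single `log` object. An illustrative fragment of the new shape (the values are arbitrary examples, except `max_size`, whose 100 MB default is documented above):

```yaml
log:
  file: ""          # empty: stdout; "syslog": write to syslog
  max_backups: 0
  max_size: 100     # megabytes
  max_age: 3        # days
  compress: false
  local_time: false
  verbose: false
```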

View file

@ -17,6 +17,7 @@ import (
	"github.com/AdguardTeam/AdGuardHome/internal/dnsforward"
	"github.com/AdguardTeam/AdGuardHome/internal/filtering"
	"github.com/AdguardTeam/AdGuardHome/internal/querylog"
+	"github.com/AdguardTeam/AdGuardHome/internal/rdns"
	"github.com/AdguardTeam/AdGuardHome/internal/stats"
	"github.com/AdguardTeam/AdGuardHome/internal/whois"
	"github.com/AdguardTeam/dnsproxy/proxy"
@ -167,30 +168,77 @@ func initDNSServer(
		return fmt.Errorf("dnsServer.Prepare: %w", err)
	}

-	if config.Clients.Sources.RDNS {
-		Context.rdns = NewRDNS(Context.dnsServer, &Context.clients, config.DNS.UsePrivateRDNS)
-	}
+	initRDNS()
	initWHOIS()

	return nil
}

+const (
+	// defaultQueueSize is the size of queue of IPs for rDNS and WHOIS
+	// processing.
+	defaultQueueSize = 255
+
+	// defaultCacheSize is the maximum size of the cache for rDNS and WHOIS
+	// processing.  It must be greater than zero.
+	defaultCacheSize = 10_000
+
+	// defaultIPTTL is the Time to Live duration for IP addresses cached by
+	// rDNS and WHOIS.
+	defaultIPTTL = 1 * time.Hour
+)
+
+// initRDNS initializes the rDNS.
+func initRDNS() {
+	Context.rdnsCh = make(chan netip.Addr, defaultQueueSize)
+
+	// TODO(s.chzhen): Add ability to disable it on dns server configuration
+	// update in [dnsforward] package.
+	r := rdns.New(&rdns.Config{
+		Exchanger: Context.dnsServer,
+		CacheSize: defaultCacheSize,
+		CacheTTL:  defaultIPTTL,
+	})
+
+	go processRDNS(r)
+}
+
+// processRDNS processes reverse DNS lookup queries.  It is intended to be used
+// as a goroutine.
+func processRDNS(r rdns.Interface) {
+	defer log.OnPanic("rdns")
+
+	for ip := range Context.rdnsCh {
+		ok := Context.dnsServer.ShouldResolveClient(ip)
+		if !ok {
+			continue
+		}
+
+		host, changed := r.Process(ip)
+		if host == "" || !changed {
+			continue
+		}
+
+		ok = Context.clients.AddHost(ip, host, ClientSourceRDNS)
+		if ok {
+			continue
+		}
+
+		log.Debug(
+			"dns: can't set rdns info for client %q: already set with higher priority source",
+			ip,
+		)
+	}
+}
+
// initWHOIS initializes the WHOIS.
//
// TODO(s.chzhen): Consider making configurable.
func initWHOIS() {
	const (
-		// defaultQueueSize is the size of queue of IPs for WHOIS processing.
-		defaultQueueSize = 255
-
		// defaultTimeout is the timeout for WHOIS requests.
		defaultTimeout = 5 * time.Second

-		// defaultCacheSize is the maximum size of the cache.  If it's zero,
-		// cache size is unlimited.
-		defaultCacheSize = 10_000
-
		// defaultMaxConnReadSize is an upper limit in bytes for reading from
		// net.Conn.
		defaultMaxConnReadSize = 64 * 1024
@ -200,9 +248,6 @@ func initWHOIS() {
		// defaultMaxInfoLen is the maximum length of whois.Info fields.
		defaultMaxInfoLen = 250
-
-		// defaultIPTTL is the Time to Live duration for cached IP addresses.
-		defaultIPTTL = 1 * time.Hour
	)

	Context.whoisCh = make(chan netip.Addr, defaultQueueSize)
@ -274,11 +319,7 @@ func onDNSRequest(pctx *proxy.DNSContext) {
		return
	}

-	srcs := config.Clients.Sources
-	if srcs.RDNS && !ip.IsLoopback() {
-		Context.rdns.Begin(ip)
-	}
+	Context.rdnsCh <- ip
	Context.whoisCh <- ip
}
@ -517,11 +558,7 @@ func startDNSServer() error {
	const topClientsNumber = 100 // the number of clients to get
	for _, ip := range Context.stats.TopClientsIP(topClientsNumber) {
-		srcs := config.Clients.Sources
-		if srcs.RDNS && !ip.IsLoopback() {
-			Context.rdns.Begin(ip)
-		}
+		Context.rdnsCh <- ip
		Context.whoisCh <- ip
	}

View file

@ -56,7 +56,6 @@ type homeContext struct {
	stats      stats.Interface      // statistics module
	queryLog   querylog.QueryLog    // query log module
	dnsServer  *dnsforward.Server   // DNS module
-	rdns       *RDNS                // rDNS module
	dhcpServer dhcpd.Interface      // DHCP module
	auth       *Auth                // HTTP authentication module
	filters    *filtering.DNSFilter // DNS filtering module
@ -83,6 +82,9 @@ type homeContext struct {
	client           *http.Client
	appSignalChannel chan os.Signal // Channel for receiving OS signals by the console app

+	// rdnsCh is the channel for receiving IPs for rDNS processing.
+	rdnsCh chan netip.Addr
+
	// whoisCh is the channel for receiving IPs for WHOIS processing.
	whoisCh chan netip.Addr
@ -468,7 +470,7 @@ func setupDNSFilteringConf(conf *filtering.Config) (err error) {
		ServiceName: pcService,
		TXTSuffix:   pcTXTSuffix,
		CacheTime:   cacheTime,
-		CacheSize:   conf.SafeBrowsingCacheSize,
+		CacheSize:   conf.ParentalCacheSize,
	})

	conf.SafeSearchConf.CustomResolver = safeSearchResolver{}
@ -829,20 +831,21 @@ func configureLogger(opts options) (err error) {
// getLogSettings returns a log settings object properly initialized from opts.
func getLogSettings(opts options) (ls *logSettings) {
	ls = readLogSettings()
+	configLogSettings := config.Log

	// Command-line arguments can override config settings.
-	if opts.verbose || config.Verbose {
+	if opts.verbose || configLogSettings.Verbose {
		ls.Verbose = true
	}
-	ls.File = stringutil.Coalesce(opts.logFile, config.File, ls.File)
+	ls.File = stringutil.Coalesce(opts.logFile, configLogSettings.File, ls.File)

	// Handle default log settings overrides.
-	ls.Compress = config.Compress
-	ls.LocalTime = config.LocalTime
-	ls.MaxBackups = config.MaxBackups
-	ls.MaxSize = config.MaxSize
-	ls.MaxAge = config.MaxAge
+	ls.Compress = configLogSettings.Compress
+	ls.LocalTime = configLogSettings.LocalTime
+	ls.MaxBackups = configLogSettings.MaxBackups
+	ls.MaxSize = configLogSettings.MaxSize
+	ls.MaxAge = configLogSettings.MaxAge

	if opts.runningAsService && ls.File == "" && runtime.GOOS == "windows" {
		// When running as a Windows service, use eventlog by default if

View file

@ -1,143 +0,0 @@
package home
import (
"encoding/binary"
"net/netip"
"sync/atomic"
"time"
"github.com/AdguardTeam/AdGuardHome/internal/dnsforward"
"github.com/AdguardTeam/golibs/cache"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/log"
)
// RDNS resolves clients' addresses to enrich their metadata.
type RDNS struct {
exchanger dnsforward.RDNSExchanger
clients *clientsContainer
// ipCh used to pass client's IP to rDNS workerLoop.
ipCh chan netip.Addr
// ipCache caches the IP addresses to be resolved by rDNS. The resolved
// address stays here while it's inside clients. After leaving clients the
// address will be resolved once again. If the address couldn't be
// resolved, cache prevents further attempts to resolve it for some time.
ipCache cache.Cache
// usePrivate stores the state of current private reverse-DNS resolving
// settings.
usePrivate atomic.Bool
}
// Default AdGuard Home reverse DNS values.
const (
revDNSCacheSize = 10000
// TODO(e.burkov): Make these values configurable.
revDNSCacheTTL = 24 * 60 * 60
revDNSFailureCacheTTL = 1 * 60 * 60
revDNSQueueSize = 256
)
// NewRDNS creates and returns initialized RDNS.
func NewRDNS(
exchanger dnsforward.RDNSExchanger,
clients *clientsContainer,
usePrivate bool,
) (rDNS *RDNS) {
rDNS = &RDNS{
exchanger: exchanger,
clients: clients,
ipCache: cache.New(cache.Config{
EnableLRU: true,
MaxCount: revDNSCacheSize,
}),
ipCh: make(chan netip.Addr, revDNSQueueSize),
}
rDNS.usePrivate.Store(usePrivate)
go rDNS.workerLoop()
return rDNS
}
// ensurePrivateCache ensures that the state of the RDNS cache is consistent
// with the current private client RDNS resolving settings.
//
// TODO(e.burkov): Clearing cache each time this value changed is not a perfect
// approach since only unresolved locally-served addresses should be removed.
// Implement when improving the cache.
func (r *RDNS) ensurePrivateCache() {
usePrivate := r.exchanger.ResolvesPrivatePTR()
if r.usePrivate.CompareAndSwap(!usePrivate, usePrivate) {
r.ipCache.Clear()
}
}
// isCached returns true if ip is already cached and not expired yet. It also
// caches it otherwise.
func (r *RDNS) isCached(ip netip.Addr) (ok bool) {
ipBytes := ip.AsSlice()
now := uint64(time.Now().Unix())
if expire := r.ipCache.Get(ipBytes); len(expire) != 0 {
return binary.BigEndian.Uint64(expire) > now
}
return false
}
// cache caches the ip address for ttl seconds.
func (r *RDNS) cache(ip netip.Addr, ttl uint64) {
ipData := ip.AsSlice()
ttlData := [8]byte{}
binary.BigEndian.PutUint64(ttlData[:], uint64(time.Now().Unix())+ttl)
r.ipCache.Set(ipData, ttlData[:])
}
// Begin adds the ip to the resolving queue if it is not cached or already
// resolved.
func (r *RDNS) Begin(ip netip.Addr) {
r.ensurePrivateCache()
if r.isCached(ip) || r.clients.clientSource(ip) > ClientSourceRDNS {
return
}
select {
case r.ipCh <- ip:
log.Debug("rdns: %q added to queue", ip)
default:
log.Debug("rdns: queue is full")
}
}
// workerLoop handles incoming IP addresses from ipChan and adds it into
// clients.
func (r *RDNS) workerLoop() {
defer log.OnPanic("rdns")
for ip := range r.ipCh {
ttl := uint64(revDNSCacheTTL)
host, err := r.exchanger.Exchange(ip.AsSlice())
if err != nil {
log.Debug("rdns: resolving %q: %s", ip, err)
if errors.Is(err, dnsforward.ErrRDNSFailed) {
// Cache failure for a less time.
ttl = revDNSFailureCacheTTL
}
}
r.cache(ip, ttl)
if host != "" {
_ = r.clients.AddHost(ip, host, ClientSourceRDNS)
}
}
}

View file

@ -1,264 +0,0 @@
package home
import (
"bytes"
"encoding/binary"
"fmt"
"net"
"net/netip"
"sync"
"testing"
"time"
"github.com/AdguardTeam/AdGuardHome/internal/aghalg"
"github.com/AdguardTeam/AdGuardHome/internal/aghtest"
"github.com/AdguardTeam/dnsproxy/upstream"
"github.com/AdguardTeam/golibs/cache"
"github.com/AdguardTeam/golibs/log"
"github.com/AdguardTeam/golibs/netutil"
"github.com/AdguardTeam/golibs/stringutil"
"github.com/miekg/dns"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRDNS_Begin(t *testing.T) {
aghtest.ReplaceLogLevel(t, log.DEBUG)
w := &bytes.Buffer{}
aghtest.ReplaceLogWriter(t, w)
ip1234, ip1235 := netip.MustParseAddr("1.2.3.4"), netip.MustParseAddr("1.2.3.5")
testCases := []struct {
cliIDIndex map[string]*Client
customChan chan netip.Addr
name string
wantLog string
ip netip.Addr
wantCacheHit int
wantCacheMiss int
}{{
cliIDIndex: map[string]*Client{},
customChan: nil,
name: "cached",
wantLog: "",
ip: ip1234,
wantCacheHit: 1,
wantCacheMiss: 0,
}, {
cliIDIndex: map[string]*Client{},
customChan: nil,
name: "not_cached",
wantLog: "rdns: queue is full",
ip: ip1235,
wantCacheHit: 0,
wantCacheMiss: 1,
}, {
cliIDIndex: map[string]*Client{"1.2.3.5": {}},
customChan: nil,
name: "already_in_clients",
wantLog: "",
ip: ip1235,
wantCacheHit: 0,
wantCacheMiss: 1,
}, {
cliIDIndex: map[string]*Client{},
customChan: make(chan netip.Addr, 1),
name: "add_to_queue",
wantLog: `rdns: "1.2.3.5" added to queue`,
ip: ip1235,
wantCacheHit: 0,
wantCacheMiss: 1,
}}
for _, tc := range testCases {
w.Reset()
ipCache := cache.New(cache.Config{
EnableLRU: true,
MaxCount: revDNSCacheSize,
})
ttl := make([]byte, binary.Size(uint64(0)))
binary.BigEndian.PutUint64(ttl, uint64(time.Now().Add(100*time.Hour).Unix()))
rdns := &RDNS{
ipCache: ipCache,
exchanger: &rDNSExchanger{
ex: aghtest.NewErrorUpstream(),
},
clients: &clientsContainer{
list: map[string]*Client{},
idIndex: tc.cliIDIndex,
ipToRC: map[netip.Addr]*RuntimeClient{},
allTags: stringutil.NewSet(),
},
}
ipCache.Clear()
ipCache.Set(net.IP{1, 2, 3, 4}, ttl)
if tc.customChan != nil {
rdns.ipCh = tc.customChan
defer close(tc.customChan)
}
t.Run(tc.name, func(t *testing.T) {
rdns.Begin(tc.ip)
assert.Equal(t, tc.wantCacheHit, ipCache.Stats().Hit)
assert.Equal(t, tc.wantCacheMiss, ipCache.Stats().Miss)
assert.Contains(t, w.String(), tc.wantLog)
})
}
}
// rDNSExchanger is a mock dnsforward.RDNSExchanger implementation for tests.
type rDNSExchanger struct {
ex upstream.Upstream
usePrivate bool
}
// Exchange implements dnsforward.RDNSExchanger interface for *RDNSExchanger.
func (e *rDNSExchanger) Exchange(ip net.IP) (host string, err error) {
rev, err := netutil.IPToReversedAddr(ip)
if err != nil {
return "", fmt.Errorf("reversing ip: %w", err)
}
req := &dns.Msg{
Question: []dns.Question{{
Name: dns.Fqdn(rev),
Qclass: dns.ClassINET,
Qtype: dns.TypePTR,
}},
}
resp, err := e.ex.Exchange(req)
if err != nil {
return "", err
}
if len(resp.Answer) == 0 {
return "", nil
}
return resp.Answer[0].Header().Name, nil
}
// Exchange implements dnsforward.RDNSExchanger interface for *RDNSExchanger.
func (e *rDNSExchanger) ResolvesPrivatePTR() (ok bool) {
return e.usePrivate
}
func TestRDNS_ensurePrivateCache(t *testing.T) {
data := []byte{1, 2, 3, 4}
ipCache := cache.New(cache.Config{
EnableLRU: true,
MaxCount: revDNSCacheSize,
})
ex := &rDNSExchanger{
ex: aghtest.NewErrorUpstream(),
}
rdns := &RDNS{
ipCache: ipCache,
exchanger: ex,
}
rdns.ipCache.Set(data, data)
require.NotZero(t, rdns.ipCache.Stats().Count)
ex.usePrivate = !ex.usePrivate
rdns.ensurePrivateCache()
require.Zero(t, rdns.ipCache.Stats().Count)
}
func TestRDNS_WorkerLoop(t *testing.T) {
aghtest.ReplaceLogLevel(t, log.DEBUG)
w := &bytes.Buffer{}
aghtest.ReplaceLogWriter(t, w)
localIP := netip.MustParseAddr("192.168.1.1")
revIPv4, err := netutil.IPToReversedAddr(localIP.AsSlice())
require.NoError(t, err)
revIPv6, err := netutil.IPToReversedAddr(net.ParseIP("2a00:1450:400c:c06::93"))
require.NoError(t, err)
locUpstream := &aghtest.UpstreamMock{
OnAddress: func() (addr string) { return "local.upstream.example" },
OnExchange: func(req *dns.Msg) (resp *dns.Msg, err error) {
return aghalg.Coalesce(
aghtest.MatchedResponse(req, dns.TypePTR, revIPv4, "local.domain"),
aghtest.MatchedResponse(req, dns.TypePTR, revIPv6, "ipv6.domain"),
new(dns.Msg).SetRcode(req, dns.RcodeNameError),
), nil
},
}
errUpstream := aghtest.NewErrorUpstream()
testCases := []struct {
ups upstream.Upstream
cliIP netip.Addr
wantLog string
name string
wantClientSource clientSource
}{{
ups: locUpstream,
cliIP: localIP,
wantLog: "",
name: "all_good",
wantClientSource: ClientSourceRDNS,
}, {
ups: errUpstream,
cliIP: netip.MustParseAddr("192.168.1.2"),
wantLog: `rdns: resolving "192.168.1.2": test upstream error`,
name: "resolve_error",
wantClientSource: ClientSourceNone,
}, {
ups: locUpstream,
cliIP: netip.MustParseAddr("2a00:1450:400c:c06::93"),
wantLog: "",
name: "ipv6_good",
wantClientSource: ClientSourceRDNS,
}}
for _, tc := range testCases {
w.Reset()
cc := newClientsContainer(t)
ch := make(chan netip.Addr)
rdns := &RDNS{
exchanger: &rDNSExchanger{
ex: tc.ups,
},
clients: cc,
ipCh: ch,
ipCache: cache.New(cache.Config{
EnableLRU: true,
MaxCount: revDNSCacheSize,
}),
}
t.Run(tc.name, func(t *testing.T) {
var wg sync.WaitGroup
wg.Add(1)
go func() {
rdns.workerLoop()
wg.Done()
}()
ch <- tc.cliIP
close(ch)
wg.Wait()
if tc.wantLog != "" {
assert.Contains(t, w.String(), tc.wantLog)
}
assert.Equal(t, tc.wantClientSource, cc.clientSource(tc.cliIP))
})
}
}


@@ -23,7 +23,7 @@ import (
 )

 // currentSchemaVersion is the current schema version.
-const currentSchemaVersion = 23
+const currentSchemaVersion = 24

 // These aliases are provided for convenience.
 type (
@@ -98,6 +98,7 @@ func upgradeConfigSchema(oldVersion int, diskConf yobj) (err error) {
 upgradeSchema20to21,
 upgradeSchema21to22,
 upgradeSchema22to23,
+upgradeSchema23to24,
 }

 n := 0
@@ -1325,6 +1326,110 @@ func upgradeSchema22to23(diskConf yobj) (err error) {
 return nil
 }
// upgradeSchema23to24 performs the following changes:
//
// # BEFORE:
// 'log_file': ""
// 'log_max_backups': 0
// 'log_max_size': 100
// 'log_max_age': 3
// 'log_compress': false
// 'log_localtime': false
// 'verbose': false
//
// # AFTER:
// 'log':
// 'file': ""
// 'max_backups': 0
// 'max_size': 100
// 'max_age': 3
// 'compress': false
// 'local_time': false
// 'verbose': false
func upgradeSchema23to24(diskConf yobj) (err error) {
log.Printf("Upgrade yaml: 23 to 24")
diskConf["schema_version"] = 24
logObj := yobj{}
err = coalesceError(
moveField[string](diskConf, logObj, "log_file", "file"),
moveField[int](diskConf, logObj, "log_max_backups", "max_backups"),
moveField[int](diskConf, logObj, "log_max_size", "max_size"),
moveField[int](diskConf, logObj, "log_max_age", "max_age"),
moveField[bool](diskConf, logObj, "log_compress", "compress"),
moveField[bool](diskConf, logObj, "log_localtime", "local_time"),
moveField[bool](diskConf, logObj, "verbose", "verbose"),
)
if err != nil {
// Don't wrap the error, because it's informative enough as is.
return err
}
if len(logObj) != 0 {
diskConf["log"] = logObj
}
delete(diskConf, "log_file")
delete(diskConf, "log_max_backups")
delete(diskConf, "log_max_size")
delete(diskConf, "log_max_age")
delete(diskConf, "log_compress")
delete(diskConf, "log_localtime")
delete(diskConf, "verbose")
return nil
}
// moveField gets the field value for key from diskConf, and then sets this
// value in newConf for newKey.
func moveField[T any](diskConf, newConf yobj, key, newKey string) (err error) {
ok, newVal, err := fieldValue[T](diskConf, key)
if !ok {
return err
}
switch v := newVal.(type) {
case int, bool, string:
newConf[newKey] = v
default:
return fmt.Errorf("invalid type of %s: %T", key, newVal)
}
return nil
}
// fieldValue returns the value of type T for key in diskConf object.
func fieldValue[T any](diskConf yobj, key string) (ok bool, field any, err error) {
fieldVal, ok := diskConf[key]
if !ok {
return false, new(T), nil
}
f, ok := fieldVal.(T)
if !ok {
return false, nil, fmt.Errorf("unexpected type of %s: %T", key, fieldVal)
}
return true, f, nil
}
// coalesceError returns the first non-nil error.  It is named after the
// COALESCE function in SQL.  If all errors are nil, it returns nil.
//
// TODO(a.garipov): Consider a similar helper to group errors together to show
// as many errors as possible.
//
// TODO(a.garipov): Think of ways to merge with [aghalg.Coalesce].
func coalesceError(errors ...error) (res error) {
for _, err := range errors {
if err != nil {
return err
}
}
return nil
}
 // TODO(a.garipov): Replace with log.Output when we port it to our logging
 // package.
 func funcName() string {


@@ -1306,3 +1306,76 @@ func TestUpgradeSchema22to23(t *testing.T) {
 })
 }
 }
func TestUpgradeSchema23to24(t *testing.T) {
const newSchemaVer = 24
testCases := []struct {
in yobj
want yobj
name string
wantErrMsg string
}{{
name: "empty",
in: yobj{},
want: yobj{
"schema_version": newSchemaVer,
},
wantErrMsg: "",
}, {
name: "ok",
in: yobj{
"log_file": "/test/path.log",
"log_max_backups": 1,
"log_max_size": 2,
"log_max_age": 3,
"log_compress": true,
"log_localtime": true,
"verbose": true,
},
want: yobj{
"log": yobj{
"file": "/test/path.log",
"max_backups": 1,
"max_size": 2,
"max_age": 3,
"compress": true,
"local_time": true,
"verbose": true,
},
"schema_version": newSchemaVer,
},
wantErrMsg: "",
}, {
name: "invalid",
in: yobj{
"log_file": "/test/path.log",
"log_max_backups": 1,
"log_max_size": 2,
"log_max_age": 3,
"log_compress": "",
"log_localtime": true,
"verbose": true,
},
want: yobj{
"log_file": "/test/path.log",
"log_max_backups": 1,
"log_max_size": 2,
"log_max_age": 3,
"log_compress": "",
"log_localtime": true,
"verbose": true,
"schema_version": newSchemaVer,
},
wantErrMsg: "unexpected type of log_compress: string",
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
err := upgradeSchema23to24(tc.in)
testutil.AssertErrorMsg(t, tc.wantErrMsg, err)
assert.Equal(t, tc.want, tc.in)
})
}
}
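The generic `moveField`/`fieldValue` helpers exercised by the test above can be sketched as a standalone program. This is a simplified sketch, not the project's code: the `yobj` alias and key names mirror the diff, but the generic signature here returns `T` directly instead of `any`, and the `int`/`bool`/`string` type-switch guard of the real `moveField` is omitted.

```go
package main

import "fmt"

// yobj mirrors the map alias used by the configuration upgrade code.
type yobj = map[string]any

// fieldValue returns the value of type T for key in the conf object.  ok is
// false if the key is absent; err is non-nil on a type mismatch.
func fieldValue[T any](conf yobj, key string) (ok bool, field T, err error) {
	v, ok := conf[key]
	if !ok {
		var zero T

		return false, zero, nil
	}

	f, ok := v.(T)
	if !ok {
		var zero T

		return false, zero, fmt.Errorf("unexpected type of %s: %T", key, v)
	}

	return true, f, nil
}

// moveField moves the value for key in conf to newKey in newConf.  A missing
// key is a no-op.
func moveField[T any](conf, newConf yobj, key, newKey string) (err error) {
	ok, val, err := fieldValue[T](conf, key)
	if !ok {
		return err
	}

	newConf[newKey] = val

	return nil
}

func main() {
	conf := yobj{"log_file": "/test/path.log", "verbose": true}
	logObj := yobj{}
	if err := moveField[string](conf, logObj, "log_file", "file"); err != nil {
		panic(err)
	}
	delete(conf, "log_file")

	fmt.Println(logObj["file"]) // /test/path.log
}
```

A mismatched type, as in the "invalid" test case, surfaces as an error from the type assertion rather than silently copying the value.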


@@ -4,7 +4,6 @@ package querylog

 import (
 "fmt"
 "os"
-"strings"
 "sync"
 "time"
@@ -161,10 +160,7 @@ func (l *queryLog) clear() {
 // newLogEntry creates an instance of logEntry from parameters.
 func newLogEntry(params *AddParams) (entry *logEntry) {
 q := params.Question.Question[0]
-qHost := q.Name
-if qHost != "." {
-qHost = strings.ToLower(q.Name[:len(q.Name)-1])
-}
+qHost := aghnet.NormalizeDomain(q.Name)
 entry = &logEntry{
 // TODO(d.kolyshev): Export this timestamp to func params.
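The inline logic replaced above is roughly what the `aghnet.NormalizeDomain` helper does: lowercase the query name and strip the trailing dot, leaving the root domain `"."` untouched. A standalone sketch of the equivalent behavior (the real helper lives in the project's `aghnet` package):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeDomain mimics the normalization applied to query hostnames: the
// root domain "." is returned unchanged, any other FQDN is lowercased and
// loses its trailing dot.
func normalizeDomain(name string) (host string) {
	if name == "." {
		return name
	}

	return strings.ToLower(strings.TrimSuffix(name, "."))
}

func main() {
	fmt.Println(normalizeDomain("Example.ORG.")) // example.org
	fmt.Println(normalizeDomain("."))            // .
}
```

Using `strings.TrimSuffix` instead of slicing off the last byte also keeps the function safe for names that arrive without a trailing dot.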

internal/rdns/rdns.go (new file)

@@ -0,0 +1,132 @@
// Package rdns processes reverse DNS lookup queries.
package rdns
import (
"net/netip"
"time"
"github.com/AdguardTeam/golibs/errors"
"github.com/AdguardTeam/golibs/log"
"github.com/bluele/gcache"
)
// Interface processes rDNS queries.
type Interface interface {
// Process makes an rDNS request and returns the domain name.  changed
// indicates that the domain name has changed since the last request.
Process(ip netip.Addr) (host string, changed bool)
}
// Empty is an empty [Interface] implementation which does nothing.
type Empty struct{}
// type check
var _ Interface = (*Empty)(nil)
// Process implements the [Interface] interface for Empty.
func (Empty) Process(_ netip.Addr) (host string, changed bool) {
return "", false
}
// Exchanger is a resolver for clients' addresses.
type Exchanger interface {
// Exchange tries to resolve the IP address in a suitable way, i.e. either
// locally or externally.
Exchange(ip netip.Addr) (host string, err error)
}
// Config is the configuration structure for Default.
type Config struct {
// Exchanger resolves IP addresses to domain names.
Exchanger Exchanger
// CacheSize is the maximum size of the cache. It must be greater than
// zero.
CacheSize int
// CacheTTL is the Time to Live duration for cached IP addresses.
CacheTTL time.Duration
}
// Default is the default rDNS query processor.
type Default struct {
// cache is the cache containing IP addresses of clients.  An active IP
// address is resolved once again after it expires.  If an IP address
// couldn't be resolved, it stays here for some time to prevent further
// attempts to resolve the same IP.
cache gcache.Cache
// exchanger resolves IP addresses to domain names.
exchanger Exchanger
// cacheTTL is the Time to Live duration for cached IP addresses.
cacheTTL time.Duration
}
// New returns a new default rDNS query processor. conf must not be nil.
func New(conf *Config) (r *Default) {
return &Default{
cache: gcache.New(conf.CacheSize).LRU().Build(),
exchanger: conf.Exchanger,
cacheTTL: conf.CacheTTL,
}
}
// type check
var _ Interface = (*Default)(nil)
// Process implements the [Interface] interface for Default.
func (r *Default) Process(ip netip.Addr) (host string, changed bool) {
fromCache, expired := r.findInCache(ip)
if !expired {
return fromCache, false
}
host, err := r.exchanger.Exchange(ip)
if err != nil {
log.Debug("rdns: resolving %q: %s", ip, err)
}
item := &cacheItem{
expiry: time.Now().Add(r.cacheTTL),
host: host,
}
err = r.cache.Set(ip, item)
if err != nil {
log.Debug("rdns: cache: adding item %q: %s", ip, err)
}
return host, fromCache == "" || host != fromCache
}
// findInCache finds domain name in the cache. expired is true if host is not
// valid anymore.
func (r *Default) findInCache(ip netip.Addr) (host string, expired bool) {
val, err := r.cache.Get(ip)
if err != nil {
if !errors.Is(err, gcache.KeyNotFoundError) {
log.Debug("rdns: cache: retrieving %q: %s", ip, err)
}
return "", true
}
item, ok := val.(*cacheItem)
if !ok {
log.Debug("rdns: cache: %q bad type %T", ip, val)
return "", true
}
return item.host, time.Now().After(item.expiry)
}
// cacheItem represents an item that we will store in the cache.
type cacheItem struct {
// expiry is the time when cacheItem will expire.
expiry time.Time
// host is the domain name of a runtime client.
host string
}

internal/rdns/rdns_test.go (new file)

@@ -0,0 +1,105 @@
package rdns_test
import (
"net/netip"
"testing"
"time"
"github.com/AdguardTeam/AdGuardHome/internal/rdns"
"github.com/AdguardTeam/golibs/netutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// fakeRDNSExchanger is a mock [rdns.Exchanger] implementation for tests.
type fakeRDNSExchanger struct {
OnExchange func(ip netip.Addr) (host string, err error)
}
// type check
var _ rdns.Exchanger = (*fakeRDNSExchanger)(nil)
// Exchange implements [rdns.Exchanger] interface for *fakeRDNSExchanger.
func (e *fakeRDNSExchanger) Exchange(ip netip.Addr) (host string, err error) {
return e.OnExchange(ip)
}
func TestDefault_Process(t *testing.T) {
ip1 := netip.MustParseAddr("1.2.3.4")
revAddr1, err := netutil.IPToReversedAddr(ip1.AsSlice())
require.NoError(t, err)
ip2 := netip.MustParseAddr("4.3.2.1")
revAddr2, err := netutil.IPToReversedAddr(ip2.AsSlice())
require.NoError(t, err)
localIP := netip.MustParseAddr("192.168.0.1")
localRevAddr1, err := netutil.IPToReversedAddr(localIP.AsSlice())
require.NoError(t, err)
config := &rdns.Config{
CacheSize: 100,
CacheTTL: time.Hour,
}
testCases := []struct {
name string
addr netip.Addr
want string
}{{
name: "first",
addr: ip1,
want: revAddr1,
}, {
name: "second",
addr: ip2,
want: revAddr2,
}, {
name: "empty",
addr: netip.MustParseAddr("0.0.0.0"),
want: "",
}, {
name: "private",
addr: localIP,
want: localRevAddr1,
}}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
hit := 0
onExchange := func(ip netip.Addr) (host string, err error) {
hit++
switch ip {
case ip1:
return revAddr1, nil
case ip2:
return revAddr2, nil
case localIP:
return localRevAddr1, nil
default:
return "", nil
}
}
exchanger := &fakeRDNSExchanger{
OnExchange: onExchange,
}
config.Exchanger = exchanger
r := rdns.New(config)
got, changed := r.Process(tc.addr)
require.True(t, changed)
assert.Equal(t, tc.want, got)
assert.Equal(t, 1, hit)
// From cache.
got, changed = r.Process(tc.addr)
require.False(t, changed)
assert.Equal(t, tc.want, got)
assert.Equal(t, 1, hit)
})
}
}


@@ -86,7 +86,7 @@ func TestHandleStatsConfig(t *testing.T) {
 },
 },
 wantCode: http.StatusUnprocessableEntity,
-wantErr: "ignored: duplicate host name \"ignor.ed\" at index 1\n",
+wantErr: "ignored: duplicate hostname \"ignor.ed\" at index 1\n",
 }, {
 name: "ignored_empty",
 body: getConfigResp{
@@ -97,7 +97,7 @@ func TestHandleStatsConfig(t *testing.T) {
 },
 },
 wantCode: http.StatusUnprocessableEntity,
-wantErr: "ignored: host name is empty\n",
+wantErr: "ignored: at index 0: hostname is empty\n",
 }, {
 name: "enabled_is_null",
 body: getConfigResp{


@@ -10,9 +10,10 @@ require (
 github.com/kyoh86/looppointer v0.2.1
 github.com/securego/gosec/v2 v2.16.0
 github.com/uudashr/gocognit v1.0.6
-golang.org/x/tools v0.10.0
+golang.org/x/tools v0.11.0
 golang.org/x/vuln v0.2.0
-honnef.co/go/tools v0.4.3
+// TODO(a.garipov): Return to tagged releases once a new one appears.
+honnef.co/go/tools v0.5.0-0.dev.0.20230709092525-bc759185c5ee
 mvdan.cc/gofumpt v0.5.0
 mvdan.cc/unparam v0.0.0-20230610194454-9ea02bef9868
 )
@@ -26,9 +27,9 @@ require (
 github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect
 github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
 golang.org/x/exp v0.0.0-20230321023759-10a507213a29 // indirect
-golang.org/x/exp/typeparams v0.0.0-20230626212559-97b1e661b5df // indirect
+golang.org/x/exp/typeparams v0.0.0-20230711023510-fffb14384f22 // indirect
-golang.org/x/mod v0.11.0 // indirect
+golang.org/x/mod v0.12.0 // indirect
 golang.org/x/sync v0.3.0 // indirect
-golang.org/x/sys v0.9.0 // indirect
+golang.org/x/sys v0.10.0 // indirect
 gopkg.in/yaml.v3 v3.0.1 // indirect
 )


@@ -52,21 +52,21 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/exp v0.0.0-20230321023759-10a507213a29 h1:ooxPy7fPvB4kwsA2h+iBNHkAbp/4JxTSwCmvdjEYmug=
 golang.org/x/exp v0.0.0-20230321023759-10a507213a29/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
-golang.org/x/exp/typeparams v0.0.0-20230626212559-97b1e661b5df h1:jfUqBujZx2dktJVEmZpCkyngz7MWrVv1y9kLOqFNsqw=
+golang.org/x/exp/typeparams v0.0.0-20230711023510-fffb14384f22 h1:e8iSCQYXZ4EB6q3kIfy2fgPFTvDbozqzRe4OuIOyrL4=
-golang.org/x/exp/typeparams v0.0.0-20230626212559-97b1e661b5df/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
+golang.org/x/exp/typeparams v0.0.0-20230711023510-fffb14384f22/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
 golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
 golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
-golang.org/x/mod v0.11.0 h1:bUO06HqtnRcc/7l71XBe4WcqTZ+3AH1J59zWDDwLKgU=
+golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
-golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
 golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.11.0 h1:Gi2tvZIJyBtO9SDr1q9h5hEQCp/4L2RQ+ar0qjx2oNU=
+golang.org/x/net v0.12.0 h1:cfawfvKITfUsFCeJIHJrbSxpeu/E81khclypR0GVT50=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -82,8 +82,8 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220702020025-31831981b65f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.9.0 h1:KS/R3tvhPqvJvwcKfnBHJwwthS11LRhmM5D59eEXa0s=
+golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
-golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@@ -96,8 +96,8 @@ golang.org/x/tools v0.0.0-20201007032633-0806396f153e/go.mod h1:z6u4i615ZeAfBE4X
 golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
 golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
 golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4=
-golang.org/x/tools v0.10.0 h1:tvDr/iQoUqNdohiYm0LmmKcBk+q86lb9EprIUFhHHGg=
+golang.org/x/tools v0.11.0 h1:EMCa6U9S2LtZXLAMoWiR/R8dAQFRqbAitmbJ2UKhoi8=
-golang.org/x/tools v0.10.0/go.mod h1:UJwyiVBsOA2uwvK/e5OY3GTpDUJriEd+/YlqAwLPmyM=
+golang.org/x/tools v0.11.0/go.mod h1:anzJrxPjNtfgiYQYirP2CPGzGLxrH2u2QBhn6Bf3qY8=
 golang.org/x/vuln v0.2.0 h1:Dlz47lW0pvPHU7tnb10S8vbMn9GnV2B6eyT7Tem5XBI=
 golang.org/x/vuln v0.2.0/go.mod h1:V0eyhHwaAaHrt42J9bgrN6rd12f6GU4T0Lu0ex2wDg4=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -107,8 +107,8 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-honnef.co/go/tools v0.4.3 h1:o/n5/K5gXqk8Gozvs2cnL0F2S1/g1vcGCAx2vETjITw=
+honnef.co/go/tools v0.5.0-0.dev.0.20230709092525-bc759185c5ee h1:mpyvMqtlVZTwEv78QL3S2ZDTMHMO1fgNwr2kC7+K7oU=
-honnef.co/go/tools v0.4.3/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
+honnef.co/go/tools v0.5.0-0.dev.0.20230709092525-bc759185c5ee/go.mod h1:GUV+uIBCLpdf0/v6UhHHG/yzI/z6qPskBeQCjcNB96k=
 mvdan.cc/gofumpt v0.5.0 h1:0EQ+Z56k8tXjj/6TQD25BFNKQXpCvT0rnansIc7Ug5E=
 mvdan.cc/gofumpt v0.5.0/go.mod h1:HBeVDtMKRZpXyxFciAirzdKklDlGu8aAy1wEbH5Y9js=
 mvdan.cc/unparam v0.0.0-20230610194454-9ea02bef9868 h1:F4Q7pXcrU9UiU1fq0ZWqSOxKjNAteRuDr7JDk7uVLRQ=


@@ -18,7 +18,7 @@ Run `make init` from the project root.

-## `make/`: Makefile Scripts
+## `make/`: Makefile scripts

 The release channels are: `development` (the default), `edge`, `beta`, and
 `release`.  If verbosity levels aren't documented here, there are only two: `0`,
@@ -26,7 +26,7 @@ don't print anything, and `1`, be verbose.

-### `build-docker.sh`: Build A Multi-Architecture Docker Image
+### `build-docker.sh`: Build a multi-architecture Docker image

 Required environment:
@@ -51,7 +51,7 @@ Optional environment:

-### `build-release.sh`: Build A Release For All Platforms
+### `build-release.sh`: Build a release for all platforms

 Required environment:
@@ -101,7 +101,22 @@ Required environment:

-### `go-build.sh`: Build The Backend
+### `go-bench.sh`: Run backend benchmarks
+
+Optional environment:
+
+ *  `GO`: set an alternative name for the Go compiler.
+ *  `TIMEOUT_FLAGS`: set timeout flags for tests.  The default value is
+    `--timeout=30s`.
+ *  `VERBOSE`: verbosity level.  `1` shows every command that is run and every
+    Go package that is processed.  `2` also shows subcommands and environment.
+    The default value is `0`, don't be verbose.
+
+### `go-build.sh`: Build the backend

 Optional environment:
@@ -135,7 +150,7 @@ Required environment:

-### `go-deps.sh`: Install Backend Dependencies
+### `go-deps.sh`: Install backend dependencies

 Optional environment:
@@ -147,7 +162,25 @@ Optional environment:

-### `go-lint.sh`: Run Backend Static Analyzers
+### `go-fuzz.sh`: Run backend fuzz tests
+
+Optional environment:
+
+ *  `GO`: set an alternative name for the Go compiler.
+ *  `FUZZTIME_FLAGS`: set fuzz flags for tests.  The default value is
+    `--fuzztime=20s`.
+ *  `TIMEOUT_FLAGS`: set timeout flags for tests.  The default value is
+    `--timeout=30s`.
+ *  `VERBOSE`: verbosity level.  `1` shows every command that is run and every
+    Go package that is processed.  `2` also shows subcommands and environment.
+    The default value is `0`, don't be verbose.
+
+### `go-lint.sh`: Run backend static analyzers

 Don't forget to run `make go-tools` once first!
@@ -163,7 +196,7 @@ Optional environment:

-### `go-test.sh`: Run Backend Tests
+### `go-test.sh`: Run backend tests

 Optional environment:
@@ -173,7 +206,7 @@ Optional environment:
     `1`, use the race detector.
  *  `TIMEOUT_FLAGS`: set timeout flags for tests.  The default value is
-    `--timeout 30s`.
+    `--timeout=30s`.
  *  `VERBOSE`: verbosity level.  `1` shows every command that is run and every
     Go package that is processed.  `2` also shows subcommands.  The default
@@ -181,7 +214,7 @@ Optional environment:

-### `go-tools.sh`: Install Backend Tooling
+### `go-tools.sh`: Install backend tooling

 Installs the Go static analysis and other tools into `${PWD}/bin`.  Either add
 `${PWD}/bin` to your `$PATH` before all other entries, or use the commands


@@ -107,18 +107,6 @@ cp "${dist_dir}/AdGuardHome_linux_arm_7/AdGuardHome/AdGuardHome"\
 cp "${dist_dir}/AdGuardHome_linux_ppc64le/AdGuardHome/AdGuardHome"\
 "${dist_docker}/AdGuardHome_linux_ppc64le_"

-# Copy the helper scripts.  See file docker/Dockerfile.
-dist_docker_scripts="${dist_docker}/scripts"
-readonly dist_docker_scripts
-
-mkdir -p "$dist_docker_scripts"
-cp "./docker/dns-bind.awk"\
-"${dist_docker_scripts}/dns-bind.awk"
-cp "./docker/web-bind.awk"\
-"${dist_docker_scripts}/web-bind.awk"
-cp "./docker/healthcheck.sh"\
-"${dist_docker_scripts}/healthcheck.sh"

 # Don't use quotes with $docker_version_tag and $docker_channel_tag, because we
 # want word splitting and or an empty space if tags are empty.
 #

scripts/make/go-bench.sh (new file)

@@ -0,0 +1,55 @@
#!/bin/sh
verbose="${VERBOSE:-0}"
readonly verbose
# Verbosity levels:
# 0 = Don't print anything except for errors.
# 1 = Print commands, but not nested commands.
# 2 = Print everything.
if [ "$verbose" -gt '1' ]
then
set -x
v_flags='-v=1'
x_flags='-x=1'
elif [ "$verbose" -gt '0' ]
then
set -x
v_flags='-v=1'
x_flags='-x=0'
else
set +x
v_flags='-v=0'
x_flags='-x=0'
fi
readonly v_flags x_flags
set -e -f -u
if [ "${RACE:-1}" -eq '0' ]
then
race_flags='--race=0'
else
race_flags='--race=1'
fi
readonly race_flags
go="${GO:-go}"
count_flags='--count=1'
shuffle_flags='--shuffle=on'
timeout_flags="${TIMEOUT_FLAGS:---timeout=30s}"
readonly go count_flags shuffle_flags timeout_flags
"$go" test\
"$count_flags"\
"$shuffle_flags"\
"$race_flags"\
"$timeout_flags"\
"$x_flags"\
"$v_flags"\
--bench='.'\
--benchmem\
--benchtime=1s\
--run='^$'\
./...

scripts/make/go-fuzz.sh (new file)

@@ -0,0 +1,58 @@
#!/bin/sh
verbose="${VERBOSE:-0}"
readonly verbose
# Verbosity levels:
# 0 = Don't print anything except for errors.
# 1 = Print commands, but not nested commands.
# 2 = Print everything.
if [ "$verbose" -gt '1' ]
then
set -x
v_flags='-v=1'
x_flags='-x=1'
elif [ "$verbose" -gt '0' ]
then
set -x
v_flags='-v=1'
x_flags='-x=0'
else
set +x
v_flags='-v=0'
x_flags='-x=0'
fi
readonly v_flags x_flags
set -e -f -u
if [ "${RACE:-1}" -eq '0' ]
then
race_flags='--race=0'
else
race_flags='--race=1'
fi
readonly race_flags
go="${GO:-go}"
count_flags='--count=1'
shuffle_flags='--shuffle=on'
timeout_flags="${TIMEOUT_FLAGS:---timeout=30s}"
fuzztime_flags="${FUZZTIME_FLAGS:---fuzztime=20s}"
readonly go count_flags shuffle_flags timeout_flags fuzztime_flags
# TODO(a.garipov): File an issue about using --fuzz with multiple packages.
"$go" test\
"$count_flags"\
"$shuffle_flags"\
"$race_flags"\
"$timeout_flags"\
"$x_flags"\
"$v_flags"\
"$fuzztime_flags"\
--fuzz='.'\
--run='^$'\
./internal/filtering/rulelist/\
;


@@ -35,7 +35,7 @@ set -f -u
 go_version="$( "${GO:-go}" version )"
 readonly go_version

-go_min_version='go1.19.10'
+go_min_version='go1.19.11'
 go_version_msg="
 warning: your go version (${go_version}) is different from the recommended minimal one (${go_min_version}).
 if you have the version installed, please set the GO environment variable.
@@ -176,7 +176,10 @@ run_linter gocognit --over 10\
 ./internal/aghchan/\
 ./internal/aghhttp/\
 ./internal/aghio/\
+./internal/filtering/hashprefix/\
+./internal/filtering/rulelist/\
 ./internal/next/\
+./internal/rdns/\
 ./internal/tools/\
 ./internal/version/\
 ./internal/whois/\
@@ -210,6 +213,8 @@ run_linter gosec --quiet\
 ./internal/dhcpd\
 ./internal/dhcpsvc\
 ./internal/dnsforward\
+./internal/filtering/hashprefix/\
+./internal/filtering/rulelist/\
 ./internal/next\
 ./internal/schedule\
 ./internal/stats\
@@ -218,8 +223,7 @@ run_linter gosec --quiet\
 ./internal/whois\
 ;

-# TODO(a.garipov): Enable --blank?
-run_linter errcheck --asserts ./...
+run_linter errcheck ./...

 staticcheck_matrix='
 darwin:  GOOS=darwin