www.gov.uk
Looks Normal
Standard signals observed
This reflects observed technical signals, not a judgment of the website's intent or safety. Wondering if gov.uk is legit? Below you'll find technical signals for www.gov.uk, including domain age, HTTPS security, DNS records, and redirect behavior, to help you decide whether this website is trustworthy.
The domain www.gov.uk hosts an active website with DNS records resolving to multiple IP addresses. It uses HTTPS exclusively, redirecting HTTP traffic to a secure connection, and holds a valid wildcard certificate issued by GlobalSign RSA OV SSL CA 2018, expiring in late 2026. The domain is registered through Nominet UK, and the site appears to use technologies such as Varnish and Rails. These signals describe technical characteristics only and do not indicate intent or safety.
Final URL:
https://www.gov.uk/
Redirect chain
http://www.gov.uk/
→
https://www.gov.uk/
https://www.gov.uk/
Expires in 295 days
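The "expires in N days" figure is simply the gap between the scan time and the certificate's notAfter timestamp. A minimal sketch of that calculation; both dates below are illustrative placeholders chosen so the gap matches the 295 days reported, not values read from the actual certificate:

```python
from datetime import datetime, timezone

# Placeholder dates (NOT taken from the real certificate): a notAfter in
# late 2026 and an assumed scan time, chosen so the gap is 295 days.
not_after = datetime(2026, 11, 1, tzinfo=timezone.utc)
scanned_at = datetime(2026, 1, 10, tzinfo=timezone.utc)

days_left = (not_after - scanned_at).days
print(days_left)  # 295
```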
Certificate details
Subject: /C=GB/ST=Greater London/L=London/O=Government Digital Service/CN=www.gov.uk
Issuer: /C=BE/O=GlobalSign nv-sa/CN=GlobalSign RSA OV SSL CA 2018
Chain: 2 certificates
Subject Alt Names (13)
www.gov.uk (matches the requested hostname)
*.businesslink.gov.uk
*.direct.gov.uk
*.publishing.service.gov.uk
*.cabinet-office.gov.uk
assets.digital.cabinet-office.gov.uk
service.gov.uk
data.gov.uk
+ 5 more
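A hostname is considered covered by one of these SANs under the left-most-label wildcard rule (RFC 6125): "*" matches exactly one DNS label and never crosses a dot. A minimal sketch of that matching logic; the san_matches helper is illustrative, not a real library function:

```python
# Sketch of RFC 6125-style SAN matching: the wildcard may only replace
# the entire left-most label, so "*.direct.gov.uk" covers
# "www.direct.gov.uk" but not "direct.gov.uk" or "a.b.direct.gov.uk".
def san_matches(hostname: str, san: str) -> bool:
    host_labels = hostname.lower().split(".")
    san_labels = san.lower().split(".")
    if len(host_labels) != len(san_labels):
        return False
    head, *rest = san_labels
    if head == "*":
        # Wildcard consumes the first label; the rest must match exactly.
        return host_labels[1:] == rest
    return host_labels == san_labels

print(san_matches("www.gov.uk", "www.gov.uk"))              # True
print(san_matches("www.direct.gov.uk", "*.direct.gov.uk"))  # True
print(san_matches("direct.gov.uk", "*.direct.gov.uk"))      # False
```

Real TLS stacks apply additional restrictions (e.g. no wildcard for public-suffix labels), so treat this as the core rule only.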
IP addresses & nameservers
A: 151.101.0.144, 151.101.128.144, 151.101.64.144, 151.101.192.144
AAAA: 2a04:4e42::144, 2a04:4e42:200::144, 2a04:4e42:400::144, 2a04:4e42:600::144
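A quick offline sanity check on the records above: each address should parse as a public (globally routable) IPv4 or IPv6 address. A sketch using the standard ipaddress module, with the addresses copied from the report:

```python
import ipaddress

# A and AAAA records as listed in the report above.
records = [
    "151.101.0.144", "151.101.128.144", "151.101.64.144", "151.101.192.144",
    "2a04:4e42::144", "2a04:4e42:200::144",
    "2a04:4e42:400::144", "2a04:4e42:600::144",
]

for r in records:
    addr = ipaddress.ip_address(r)
    # is_global is False for private, loopback, link-local, etc.
    print(f"{r}: IPv{addr.version}, global={addr.is_global}")
```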
More details
Status: serverUpdateProhibited, serverDeleteProhibited, serverTransferProhibited
Nameservers: dns1.nic.uk., dns2.nic.uk., ...
These directives apply to crawlers requesting this host.
Directive breakdown
Sitemap URLs
https://www.gov.uk/sitemap.xml
Raw content
User-agent: *
Disallow: /*/print$
# Don't allow indexing of site search
Disallow: /search/all*
Sitemap: https://www.gov.uk/sitemap.xml

# https://ahrefs.com/robot/ crawls the site frequently
User-agent: AhrefsBot
Crawl-delay: 10

# https://www.deepcrawl.com/bot/ makes lots of requests. Ideally we'd slow it
# down rather than blocking it but it doesn't mention whether or not it
# supports crawl-delay.
User-agent: deepcrawl
Disallow: /

# Complaints of 429 'Too many requests' seem to be coming from SharePoint servers
# (https://social.msdn.microsoft.com/Forums/en-US/3ea268ed-58a6-4166-ab40-d3f4fc55fef4)
# The robot doesn't recognise its User-Agent string, see the MS support article:
# https://support.microsoft.com/en-us/help/3019711/the-sharepoint-server-crawler-ignores-directives-in-robots-txt
User-agent: MS Search 6.0 Robot
Disallow: /
Robots.txt directives are advisory instructions for crawlers and do not enforce access control.
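For illustration, a well-behaved crawler might consult rules like these with Python's stdlib urllib.robotparser. Note the stdlib parser implements only plain prefix matching, not the wildcard extensions (/*/print$, /search/all*) used in the file above, so this sketch uses a simplified prefix-only excerpt:

```python
from urllib import robotparser

# Simplified, prefix-only excerpt of the rules: the stdlib parser does
# not understand "*" or "$" inside paths, so wildcards are omitted here.
rp = robotparser.RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /search/all

User-agent: deepcrawl
Disallow: /
""".splitlines())

print(rp.can_fetch("SomeBot", "https://www.gov.uk/"))            # True
print(rp.can_fetch("SomeBot", "https://www.gov.uk/search/all"))  # False
print(rp.can_fetch("deepcrawl", "https://www.gov.uk/"))          # False
```

As the note above says, these answers are advisory: nothing stops a crawler that simply never asks.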
Detection evidence
Technologies detected from HTTP headers and HTML patterns. Detection is passive and may not capture all technologies.
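As a rough illustration of passive detection, a scanner can match response headers against known fingerprints, e.g. a Via or X-Varnish header suggesting Varnish. The fingerprint table and sample headers below are invented for illustration; real detectors use far larger rule sets that also cover HTML patterns:

```python
# Hypothetical header fingerprints: (header name, substring to look for).
# An empty substring means the header's mere presence is a signal.
FINGERPRINTS = {
    "Varnish": [("via", "varnish"), ("x-varnish", "")],
    "Rails":   [("x-powered-by", "rails")],
}

def detect(headers: dict) -> list:
    lowered = {k.lower(): v.lower() for k, v in headers.items()}
    found = []
    for tech, rules in FINGERPRINTS.items():
        for header, needle in rules:
            if header in lowered and needle in lowered[header]:
                found.append(tech)
                break  # one matching rule is enough for this technology
    return found

# Sample headers, invented for illustration (not captured from www.gov.uk).
sample = {"Via": "1.1 varnish", "X-Varnish": "123456"}
print(detect(sample))  # ['Varnish']
```

Because this only inspects what the server happens to send, technologies that leave no header or markup trace go undetected, which is why the report hedges that detection "may not capture all technologies".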