nih.gov
Looks Normal
Standard signals observed
This reflects observed technical signals, not a judgment of the website's intent or safety. Wondering if nih.gov is legit? Below you'll find technical signals for nih.gov, including domain age, HTTPS security, DNS records, and redirect behavior, to help you decide whether this website is trustworthy.
The domain nih.gov is an active website with DNS records pointing to the IP address 156.40.212.210 and using the nameservers ns.nih.gov and ns2.nih.gov. Registered approximately 28 years ago via get.gov, the domain enforces HTTPS with a valid multi-domain certificate, issued by Go Daddy Secure Certificate Authority - G2, that expires in February 2026. The site uses a single HTTP redirect to its secure URL and appears to run Drupal as its content management system. These signals describe technical characteristics only and do not indicate intent or safety.
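As a rough illustration of how age-style signals such as "registered approximately 28 years ago" and "expires in 348 days" can be derived from raw dates, here is a minimal sketch. The dates below are placeholders chosen only to mirror the report's figures, not authoritative WHOIS or certificate data.

```python
from datetime import date

def years_since(start: date, today: date) -> int:
    """Whole years elapsed between start and today."""
    years = today.year - start.year
    # Subtract one if this year's anniversary has not been reached yet.
    if (today.month, today.day) < (start.month, start.day):
        years -= 1
    return years

def days_until(expiry: date, today: date) -> int:
    """Days remaining before expiry (negative if already past)."""
    return (expiry - today).days

# Placeholder dates, chosen only so the output matches the report.
registered = date(1997, 1, 1)
cert_expiry = date(2026, 2, 13)
today = date(2025, 3, 2)

print(years_since(registered, today))    # domain age in whole years
print(days_until(cert_expiry, today))    # days of certificate validity left
```

In practice the registration date would come from a WHOIS/RDAP lookup and the expiry from the certificate's notAfter field; the arithmetic is the same.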
Final URL:
https://www.nih.gov/
Redirect chain
https://nih.gov/ → https://www.nih.gov/
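A chain like the one above, which stays on the same domain and ends on HTTPS, is typically a benign canonicalization (apex to www) rather than an off-domain redirect. A minimal sketch of that classification, using a crude two-label apex heuristic (it would misjudge suffixes like co.uk, where a real check needs the public suffix list):

```python
from urllib.parse import urlparse

def is_benign_canonicalization(chain: list[str]) -> bool:
    """True if the chain ends on HTTPS and every hop stays on the
    starting URL's apex domain. Simplified heuristic, not a full
    public-suffix-aware check."""
    if len(chain) < 2:
        return False
    if urlparse(chain[-1]).scheme != "https":
        return False
    base = urlparse(chain[0]).hostname or ""
    # Crude apex: last two labels of the starting hostname.
    apex = ".".join(base.split(".")[-2:])
    return all(
        (h := urlparse(u).hostname or "") == apex or h.endswith("." + apex)
        for u in chain
    )

print(is_benign_canonicalization(
    ["https://nih.gov/", "https://www.nih.gov/"]))   # True
print(is_benign_canonicalization(
    ["https://nih.gov/", "https://evil.example/"]))  # False
```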
https://www.nih.gov/
Expires in 348 days
Certificate details
Subject: /CN=www.nih.gov
Issuer: /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http:\/\/certs.godaddy.com\/repository\//CN=Go Daddy Secure Certificate Authority - G2
Chain: 4 certificates
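The subject and issuer above use a slash-delimited distinguished-name format in which literal slashes inside a value (here, the repository URL in OU) are escaped as `\/`. A hedged sketch of splitting such a string into fields; this is a simplification, not a full RFC 4514 parser:

```python
def parse_dn(dn: str) -> dict:
    """Parse a slash-delimited DN like '/C=US/ST=Arizona/.../CN=...'
    into a dict, unescaping '\\/' inside values. Simplified; does not
    handle every DN escaping rule."""
    fields = {}
    sentinel = "\x00"  # protect escaped slashes before splitting
    for part in dn.replace("\\/", sentinel).split("/"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key] = value.replace(sentinel, "/")
    return fields

# The issuer string from the certificate details above.
issuer = ("/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc."
          "/OU=http:\\/\\/certs.godaddy.com\\/repository\\/"
          "/CN=Go Daddy Secure Certificate Authority - G2")

print(parse_dn(issuer)["CN"])  # Go Daddy Secure Certificate Authority - G2
```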
Subject Alt Names (28)
salud.nih.gov
devtestdomain.nih.gov
stagetestdomain.nih.gov
stagetestdomain6.nih.gov
stagetestdomain5.nih.gov
testdomain2.nih.gov
stagetestdomain2.nih.gov
stagetestdomain7.nih.gov
+ 20 more
IP addresses & nameservers
A: 156.40.212.210
NS: ns.nih.gov, ns2.nih.gov, ns3.nih.gov
MX records (8)
TXT records (56)
da1dfl8ipaafpqsh40iiuubf7nf71j9q2oprakr7gtedlcau485krh426i3oiis217bktj8o4nd2m2sv6eiguevqe64cojov3psnedd2uub8i3goc70aj1kvl0neuhle5v
+ 51 more
More details
Last changed: Aug 27, 2025
Status: server transfer prohibited
Nameservers: ns.nih.gov, ns2.nih.gov, ...
Final URL: https://www.nih.gov/robots.txt
These directives apply to crawlers requesting this host.
Directive breakdown
Raw content
#
# robots.txt
#
# This file is to prevent the crawling and indexing of certain parts
# of your site by web crawlers and spiders run by sites like Yahoo!
# and Google. By telling these "robots" where not to go on your site,
# you save bandwidth and server resources.
#
# This file will be ignored unless it is at the root of your host:
# Used: http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
#
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/robotstxt.html

User-agent: *
# CSS, JS, Images
Allow: /core/*.css$
Allow: /core/*.css?
Allow: /core/*.js$
Allow: /core/*.js?
Allow: /core/*.gif
Allow: /core/*.jpg
Allow: /core/*.jpeg
Allow: /core/*.png
Allow: /core/*.svg
Allow: /profiles/*.css$
Allow: /profiles/*.css?
Allow: /profiles/*.js$
Allow: /profiles/*.js?
Allow: /profiles/*.gif
Allow: /profiles/*.jpg
Allow: /profiles/*.jpeg
Allow: /profiles/*.png
Allow: /profiles/*.svg
# Directories
Disallow: /core/
Disallow: /profiles/
# Files
Disallow: /README.md
Disallow: /composer/Metapackage/README.txt
Disallow: /composer/Plugin/ProjectMessage/README.md
Disallow: /composer/Plugin/Scaffold/README.md
Disallow: /composer/Plugin/VendorHardening/README.txt
Disallow: /composer/Template/README.txt
Disallow: /modules/README.txt
Disallow: /sites/README.txt
Disallow: /themes/README.txt
Disallow: /web.config
# Paths (clean URLs)
Disallow: /admin/
Disallow: /comment/reply/
Disallow: /filter/tips
Disallow: /node/add/
Disallow: /search/
Disallow: /user/register
Disallow: /user/password
Disallow: /user/login
Disallow: /user/logout
Disallow: /media/oembed
Disallow: /*/media/oembed
# Paths (no clean URLs)
Disallow: /index.php/admin/
Disallow: /index.php/comment/reply/
Disallow: /index.php/filter/tips
Disallow: /index.php/node/add/
Disallow: /index.php/search/
Disallow: /index.php/user/password
Disallow: /index.php/user/register
Disallow: /index.php/user/login
Disallow: /index.php/user/logout
Disallow: /index.php/media/oembed
Disallow:
... (truncated)
Robots.txt directives are advisory instructions for crawlers and do not enforce access control.
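That advisory nature is visible in how crawlers consume the file: a well-behaved client has to parse it and opt in to honoring the rules, for example with Python's standard-library robot parser. A minimal sketch using a few directives excerpted from the file above (the checked paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# A few lines excerpted from the robots.txt shown above.
robots_lines = """\
User-agent: *
Disallow: /admin/
Disallow: /user/login
Disallow: /search/
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_lines)

# The parser only answers "may I fetch this?"; nothing stops a
# misbehaving client from fetching the URL anyway.
print(rp.can_fetch("*", "https://www.nih.gov/admin/config"))  # False
print(rp.can_fetch("*", "https://www.nih.gov/about-nih"))     # True
```

Note that the stdlib parser handles plain prefix rules like these; the `*` and `$` wildcards used in the Allow lines above are a widely supported extension that not every parser implements.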
Detection evidence
Technologies detected from HTTP headers and HTML patterns. Detection is passive and may not capture all technologies.