A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.
Summary
To keep pages out of Google's index, Google nowadays recommends the `X-Robots-Tag` HTTP header (or the `noindex` robots meta tag), not a robots.txt file:
https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag#xrobotstag
https://developers.google.com/search/docs/crawling-indexing/robots/intro
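As a concrete illustration, the `X-Robots-Tag` header can be attached to responses by the web server. This is a minimal sketch assuming nginx and a hypothetical `/private/` path (adjust to your own setup):

```nginx
# Send a noindex directive with every response under /private/,
# telling crawlers that fetch these pages not to index them.
location /private/ {
    add_header X-Robots-Tag "noindex, nofollow" always;
}
```

Note that for a crawler to see this header, the URL must not be disallowed in robots.txt: a page blocked from crawling is never fetched, so its `noindex` directive is never seen.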