juancarlospaco / faster-than-requests

Faster requests on Python 3
https://gist.github.com/juancarlospaco/37da34ed13a609663f55f4466c4dbc3e
MIT License

Faster-than-Requests


| Library | Speed | Files | LOC | Dependencies | Developers | WebSockets | Multi-Threaded Web Scraper Built-in |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PyWGET | 152.39 | 1 | 338 | Wget | >17 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| Requests | 15.58 | >20 | 2558 | >=7 | >527 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| Requests (cached object) | 5.50 | >20 | 2558 | >=7 | >527 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| Urllib | 4.00 | ??? | 1200 | 0 (std lib) | ??? | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| Urllib3 | 3.55 | >40 | 5242 | 0 (No SSL), >=5 (SSL) | >188 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| PyCurl | 0.75 | >15 | 5932 | Curl, LibCurl | >50 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| PyCurl (no SSL) | 0.68 | >15 | 5932 | Curl, LibCurl | >50 | :negative_squared_cross_mark: | :negative_squared_cross_mark: |
| Faster_than_requests | 0.40 | 1 | 999 | 0 | 1 | :heavy_check_mark: | :heavy_check_mark: 7, One-Liner |

- Lines of code counted using [CLOC](https://github.com/AlDanial/cloc).
- Direct dependencies of the package when ready to run.
- Benchmarks run on Docker, from the Dockerfile on this repo.
- Developers counted from the Contributors list of Git.
- Speed is real time to complete 10000 local HTTP requests.
- Stats as of year 2020.
- x86_64 64-bit AMD, SSD, Arch Linux.
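The methodology above (time N GET requests against a local server) can be reproduced with only the standard library; this sketch uses `urllib` as the baseline and a local `http.server`, and any client (for example `faster_than_requests.get`) can be swapped into the loop to compare:

```python
# Time N GET requests against a local HTTP server, as the benchmark notes describe.
# Standard library only; swap the urllib call for any client you want to measure.
import http.server
import threading
import time
import urllib.request

server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

N = 100  # the table above used 10000 requests
start = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen(url).read()
elapsed = time.perf_counter() - start
print(f"{N} local GETs in {elapsed:.2f}s")
server.shutdown()
```

This is a sketch of the methodology, not the repo's actual benchmark harness (which runs inside Docker).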

Sponsors

Use

```python
import faster_than_requests as requests

requests.get("http://httpbin.org/get")                                        # GET
requests.post("http://httpbin.org/post", "Some Data Here")                    # POST
requests.download("http://example.com/foo.jpg", "out.jpg")                    # Download a file
requests.scraper(["http://foo.io", "http://bar.io"], threads=True)            # Multi-Threaded Web Scraper
requests.scraper5(["http://foo.io"], sqlite_file_path="database.db")          # URL-to-SQLite Web Scraper
requests.scraper6(["http://python.org"], [r"(www|http:|https:)+[^\s]+[\w]"])  # Regex-powered Web Scraper
requests.scraper7("http://python.org", "body > div.someclass a#someid")       # CSS Selector Web Scraper
requests.websocket_send("ws://echo.websocket.org", "data here")               # WebSockets Binary/Text
```

Table Of Contents

get() post() put() head()
patch() delete() download() download2()
scraper() scraper2() scraper3() scraper4()
scraper5() scraper6() scraper7() get2str()
get2str2() get2dict() get2json()
post2str() post2dict() post2json() post2list()
download3() tuples2json() set_headers() multipartdata2str()
datauri() urlparse() urlencode() urldecode()
encodequery() encodexml() debugs() minifyhtml()
How to set DEBUG mode websocket_send() websocket_ping()
How to Install How to Windows FAQ Get Help
PyPI GitHub Actions / CI Examples Sponsors

get()

**Description:** Takes a URL string, makes an HTTP GET and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://dev.to`.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.get("http://example.com")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

**See Also:** [get2str()](https://github.com/juancarlospaco/faster-than-requests#get2str) and [get2str2()](https://github.com/juancarlospaco/faster-than-requests#get2str2)
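The 7-item response list can be unpacked positionally into named variables. The values below are hypothetical sample data standing in for a real response, not real output:

```python
# Hypothetical 7-item response list, in the shape described above:
# [body, type, status, version, url, length, headers]
resp = ["<html>...</html>", "text/html", "200 OK", "HTTP/1.1",
        "http://example.com", "17", '{"Server": "nginx"}']

body, content_type, status, version, url, length, headers = resp
print(status)  # "200 OK"
```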

post()

**Description:** Takes a URL string, makes an HTTP POST and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://dev.to`.
- `body` The body data, string type, required, can be an empty string. To POST files use this too.
- `multipart_data` Multipart data, list of tuples type, optional, must not be an empty list, example `[("key", "value")]`.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.post("http://httpbin.org/post", "Some Data Here")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

put()

**Description:** Takes a URL string, makes an HTTP PUT and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://nim-lang.org`.
- `body` The body data, string type, required, can be an empty string.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.put("http://httpbin.org/put", "Some Data Here")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

delete()

**Description:** Takes a URL string, makes an HTTP DELETE and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://nim-lang.org`.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.delete("http://example.com/api/something")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

patch()

**Description:** Takes a URL string, makes an HTTP PATCH and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://archlinux.org`.
- `body` The body data, string type, required, can be an empty string.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.patch("http://example.com", "My Body Data Here")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

head()

**Description:** Takes a URL string, makes an HTTP HEAD and returns the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `https://nim-lang.org`.
- `user_agent` User Agent, string type, optional, should not be an empty string.
- `max_redirects` Maximum redirects, int type, optional, defaults to `9`, example `5`.
- `proxy_url` Proxy URL, string type, optional, if `""` then no proxy is used, defaults to `""`, example `172.15.256.1:666`.
- `proxy_auth` Proxy authentication, string type, optional, ignored when `proxy_url` is `""`, defaults to `""`.
- `timeout` Timeout, int type, optional, millisecond precision, defaults to `-1`, example `9999`.
- `http_headers` HTTP headers, list of tuples type, optional, example `[("key", "value")]`, example `[("DNT", "1")]`.

Examples:

```python
import faster_than_requests as requests

requests.head("http://example.com/api/something")
```

**Returns:** Response, `list` type. The values of the list are string type and can be empty strings; the length of the list is always 7 items, shaped like `[body, type, status, version, url, length, headers]`. Use `to_json()` to get JSON, `to_dict()` to get a dict, or `to_tuples()` to get a list of tuples.

to_dict()

**Description:** Convert the response to a `dict`.

**Arguments:**

- `ftr_response` Response from any of the functions that return a response.

**Returns:** Response, `dict` type.

to_json()

**Description:** Convert the response to pretty-printed JSON.

**Arguments:**

- `ftr_response` Response from any of the functions that return a response.

**Returns:** Response, pretty-printed JSON.

to_tuples()

**Description:** Convert the response to a list of tuples.

**Arguments:**

- `ftr_response` Response from any of the functions that return a response.

**Returns:** Response, list of tuples.
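To illustrate how the three converters relate, here is a pure-Python sketch over a hypothetical sample response list; the keys and shapes are assumptions based on the 7-item response layout documented above, not the library's actual implementation:

```python
import json

# Hypothetical sample response list in the documented 7-item shape:
resp = ["<html>...</html>", "text/html", "200 OK", "HTTP/1.1",
        "http://example.com", "17", '{"Server": "nginx"}']
keys = ("body", "type", "status", "version", "url", "length", "headers")

as_dict   = dict(zip(keys, resp))          # roughly what to_dict() yields
as_tuples = list(zip(keys, resp))          # roughly what to_tuples() yields
as_json   = json.dumps(as_dict, indent=2)  # roughly what to_json() yields
print(as_dict["status"])  # "200 OK"
```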

Extras: Go beyond requests

scraper()

**Description:** Multi-Threaded, Ready-Made, URL-Deduplicating Web Scraper for a list of URLs.

![](misc/multithread-scraper.png)

All arguments are optional; it only needs the URLs to work. The scraper is designed as a 2-step Web Scraper: a first pass collects all URL links, then a second pass actually fetches those URLs. Requests are processed asynchronously, so it does not need to wait for one request to finish before processing the next.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["http://example.io"]`.
- `html_tag` HTML tag to parse, string type, optional, defaults to `"a"` (links), example `"h1"`.
- `case_insensitive` Case-insensitive parsing, boolean type, optional, defaults to `True`, example `True`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `threads` `threads=True` uses multi-threading, `threads=False` (or omitting it) does not, boolean type, optional.

Examples:

```python
import faster_than_requests as requests

requests.scraper(["https://nim-lang.org", "http://example.com"], threads=True)
```

**Returns:** Scraped webs.
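The 2-step idea above (first collect all links, then fetch them) can be sketched with the standard library's `html.parser`; this is an illustration of the technique, not the library's implementation:

```python
# Step 1 of a 2-step scraper: collect every href from <a> tags
# (the default html_tag, per the arguments above).
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

page = '<html><body><a href="http://foo.io">foo</a> <a href="http://bar.io">bar</a></body></html>'
collector = LinkCollector()
collector.feed(page)      # step 1: collect URL links
print(collector.links)    # step 2 would then fetch each of these URLs
```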

scraper2()

**Description:** Multi-Tag, Ready-Made, URL-Deduplicating Web Scraper for a list of URLs. All arguments are optional; it only needs the URLs to work. The scraper is designed as a 2-step Web Scraper: a first pass collects all URL links, then a second pass actually fetches those URLs. Requests are processed asynchronously, so it does not need to wait for one request to finish before processing the next. You can think of this scraper as a parallel evolution of the original scraper.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["http://example.io"]`.
- `list_of_tags` List of HTML tags to parse, list type, optional, defaults to `["a"]` (links), example `["h1", "h2"]`.
- `case_insensitive` Case-insensitive parsing, boolean type, optional, defaults to `True`, example `True`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `verbose` Print progress to the terminal, bool type, optional, defaults to `True`, example `False`.
- `delay` Delay between one download and the next, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `0`, must be a positive integer, example `42`.
- `threads` `threads=True` uses multi-threading, `threads=False` (or omitting it) does not, boolean type, optional.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `5`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.

Examples:

```python
import faster_than_requests as requests

requests.scraper2(["https://nim-lang.org", "http://example.com"], list_of_tags=["h1", "h2"], case_insensitive=False)
```

**Returns:** Scraped webs.

scraper3()

**Description:** Multi-Tag, Ready-Made, URL-Deduplicating Web Scraper for a list of URLs.

![](misc/multitag-scraper.png)

This scraper is designed with lots of extra options in its arguments. All arguments are optional; it only needs the URLs to work. The scraper is designed as a 2-step Web Scraper: a first pass collects all URL links, then a second pass actually fetches those URLs. You can think of this scraper as a parallel evolution of the original scraper.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["http://example.io"]`.
- `list_of_tags` List of HTML tags to parse, list type, optional, defaults to `["a"]` (links), example `["h1", "h2"]`.
- `case_insensitive` Case-insensitive parsing, boolean type, optional, defaults to `True`, example `True`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `start_with` Match at the start of the line, similar to `str().startswith()`, string type, optional, example `""`.
- `delay` Delay between one download and the next, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `0`, must be a positive integer, example `42`.
- `line_start` Slice the line at the start by this index, integer type, optional, defaults to `0` (no slicing, since strings start at index 0), example `3` cuts off 3 characters at the start of the line.
- `line_end` Slice the line at the end by this *reverse* index, integer type, optional, defaults to `1` (no slicing, since strings end at reverse index 1), example `9` cuts off 9 characters at the end of the line.
- `pre_replacements` List of tuples of strings to replace *before* parsing; replacements run in parallel, list type, optional, example `[("old", "new"), ("red", "blue")]` replaces `"old"` with `"new"` and `"red"` with `"blue"`.
- `post_replacements` List of tuples of strings to replace *after* parsing; replacements run in parallel, list type, optional, example `[("old", "new"), ("red", "blue")]` replaces `"old"` with `"new"` and `"red"` with `"blue"`.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `5`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.
- `verbose` Print progress to the terminal, bool type, optional, defaults to `True`, example `False`.

Examples:

```python
import faster_than_requests as requests

requests.scraper3(["https://nim-lang.org", "http://example.com"], list_of_tags=["h1", "h2"], case_insensitive=False)
```

**Returns:** Scraped webs.

scraper4()

**Description:** Images-and-Photos Ready-Made Web Scraper for a list of URLs.

![](misc/photo-scraper.png)

Images and photos scraped from the first URL are put into a new sub-folder named `0`, those from the second URL into a new sub-folder named `1`, and so on. All arguments are optional; it only needs the URLs to work. You can think of this scraper as a parallel evolution of the original scraper.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"]`.
- `case_insensitive` Case-insensitive parsing, boolean type, optional, defaults to `True`, example `True`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `visited_urls` Do not visit the same URL twice, even if redirected into it; keeps track of visited URLs, bool type, optional, defaults to `True`.
- `delay` Delay between one download and the next, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `0`, must be a positive integer, example `42`.
- `folder` Directory to download images and photos into, string type, optional, defaults to the current folder, must not be an empty string, example `/tmp`.
- `force_extension` Force downloaded files to this file extension, string type, optional, defaults to `".jpg"`, must not be an empty string, example `".png"`.
- `https_only` Only download images over secure HTTPS, ignoring plain HTTP (sometimes HTTPS may redirect to HTTP), bool type, optional, defaults to `False`, example `True`.
- `html_output` Collect all scraped images and photos into one HTML file with all scraped elements, bool type, optional, defaults to `True`, example `False`.
- `csv_output` Collect all scraped URLs into one CSV file with all scraped links, bool type, optional, defaults to `True`, example `False`.
- `verbose` Print progress to the terminal, bool type, optional, defaults to `True`, example `False`.
- `print_alt` Print the `alt` attribute of the images and photos to the terminal, bool type, optional, defaults to `False`, example `True`.
- `picture` Scrape images from the newer HTML5 `<picture>` tags instead of `<img>` tags; `<picture>` provides responsive images for several resolutions, but you may also get duplicated images, bool type, optional, defaults to `False`, example `True`.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `5`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.

Examples:

```python
import faster_than_requests as requests

requests.scraper4(["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"])
```

**Returns:** None.

scraper5()

**Description:** Recursive Web Scraper to an SQLite database: you give it a URL, it gives back an SQLite database.

![](misc/sqlite-scraper.png)

The SQLite database can be visualized with any SQLite GUI, like https://sqlitebrowser.org. If the script gets interrupted (e.g. with CTRL+C) it will try its best to keep the data consistent. Additionally, it creates a CSV file with all the scraped URLs. HTTP headers are stored as pretty-printed JSON; dates and times are stored as Unix timestamps. All arguments are optional; it only needs the URLs and the SQLite file path to work. You can think of this scraper as a parallel evolution of the original scraper.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["https://unsplash.com/s/photos/cat", "https://unsplash.com/s/photos/dog"]`.
- `sqlite_file_path` Full file path to a new SQLite database, must have a `.db` file extension, string type, required, must not be an empty string, example `"scraped_data.db"`.
- `skip_ends_with` Skip a URL if it ends with this pattern, list type, optional, must not be an empty list, example `[".jpg", ".pdf"]`.
- `case_insensitive` Case-insensitive parsing, boolean type, optional, defaults to `True`, example `True`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `visited_urls` Do not visit the same URL twice, even if redirected into it; keeps track of visited URLs, bool type, optional, defaults to `True`.
- `delay` Delay between one download and the next, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `0`, must be a positive integer, example `42`.
- `https_only` Only download over secure HTTPS, ignoring plain HTTP (sometimes HTTPS may redirect to HTTP), bool type, optional, defaults to `False`, example `True`.
- `only200` Only commit successfully scraped pages to the database, ignoring all errors, bool type, optional, example `True`.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `5`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `max_loops` Maximum total loops while scraping, a global guard against infinite redirections, integer type, optional, example `999`.
- `max_deep` Maximum recursion depth while scraping, a global guard against infinitely deep recursion, integer type, optional, example `999`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.

Examples:

```python
import faster_than_requests as requests

requests.scraper5(["https://example.com"], "scraped_data.db")
```

**Returns:** None.

scraper6()

**Description:** Regex-powered Web Scraper for a list of URLs. Scrape web content using a list of Perl Compatible Regular Expressions (PCRE standard). The regular expressions can be configured to be case-insensitive, multiline, or extended. This scraper is designed for developers who know regular expressions. [Learn Regular Expressions.](https://github.com/ziishaned/learn-regex#translations) All arguments are optional; it only needs the URLs and the regex to work. You can think of this scraper as a parallel evolution of the original scraper.

**Regex Arguments:** (arguments focused on regular expression parsing and matching)

- `list_of_regex` List of Perl Compatible Regular Expressions (PCRE standard) to match against, list type, required, example `["(www|http:|https:)+[^\s]+[\w]"]`.
- `case_insensitive` Case-insensitive regular expressions (caseless matching), boolean type, optional, defaults to `False`, example `True`.
- `multiline` Multi-line regular expressions, `^` and `$` match newlines within the data, boolean type, optional, defaults to `False`, example `True`.
- `extended` Extended regular expressions, ignore all whitespace and `#` comments, boolean type, optional, defaults to `False`, example `True`.
- `dot` Dot `.` matches anything, including new lines, boolean type, optional, defaults to `False`, example `True`.
- `start_with` PCRE regular expression to match at the start of the line, similar to `str().startswith()` but with regular expressions, string type, optional.
- `ends_with` PCRE regular expression to match at the end of the line, similar to `str().endswith()` but with regular expressions, string type, optional.
- `post_replacement_regex` PCRE regular expression to replace *after* parsing, string type, optional, works together with `post_replacement_by`; a regex post-processing step for experts on regular expressions.
- `post_replacement_by` String **to replace by** *after* parsing, string type, optional, works together with `post_replacement_regex`; a regex post-processing step for experts on regular expressions.
- `re_start` Regular expression matches start at this index, positive integer type, optional, defaults to `0`; this option is for experts on regular expressions.

**Arguments:**

- `list_of_urls` List of URLs, each URL string type, required, must not be an empty list, example `["http://example.io"]`.
- `deduplicate_urls` Deduplicate `list_of_urls`, removing repeated URLs, boolean type, optional, defaults to `False`, example `False`.
- `delay` Delay between one download and the next, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `0`, must be a positive integer, example `42`.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `5`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.
- `verbose` Print progress to the terminal, bool type, optional, defaults to `True`, example `False`.

Examples:

```python
import faster_than_requests as requests

requests.scraper6(["http://nim-lang.org", "http://python.org"], [r"(www|http:|https:)+[^\s]+[\w]"])
```

**Returns:** Scraped webs.
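A pattern can be prototyped locally with Python's `re` module before passing it to `scraper6()`; note that Python's `re` is close to, but not exactly, PCRE, so edge cases may differ:

```python
import re

# Try out the URL-matching regex from the example above on sample text.
pattern = re.compile(r"(www|http:|https:)+[^\s]+[\w]")
text = "Docs at https://nim-lang.org and http://python.org today"
matches = [m.group(0) for m in pattern.finditer(text)]
print(matches)  # ['https://nim-lang.org', 'http://python.org']
```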

scraper7()

![](https://raw.githubusercontent.com/juancarlospaco/faster-than-requests/master/css_selectors.png)

**Description:** CSS Selector-powered Web Scraper. Scrape web content using a CSS selector. The CSS syntax does NOT take regex, regex-like syntax, nor literal tag attribute values. All arguments are optional; it only needs the URL and the CSS selector to work. You can think of this scraper as a parallel evolution of the original scraper.

**Arguments:**

- `url` The URL, string type, required, must not be an empty string, example `"http://python.org"`.
- `css_selector` CSS selector, string type, required, must not be an empty string, example `"body nav.class ul.menu > li > a"`.
- `agent` User Agent, string type, optional, must not be an empty string.
- `redirects` Maximum redirects, integer type, optional, defaults to `9`, must be a positive integer.
- `timeout` Timeout, millisecond precision (`1000` = 1 second), integer type, optional, defaults to `-1`, example `42`.
- `header` HTTP headers, list of tuples type, optional, example `[("key", "value")]`.
- `proxy_url` HTTPS proxy full URL, string type, optional, must not be an empty string.
- `proxy_auth` HTTPS proxy authentication, string type, optional, defaults to `""`; an empty string is ignored.

Examples:

```python
import faster_than_requests as requests

requests.scraper7("http://python.org", "body > div.class a#someid")
```

```python
import faster_than_requests as requests

requests.scraper7("https://nim-lang.org", "a.pure-menu-link")
# ['Blog', 'Features', 'Download', 'Learn', 'Documentation', 'Forum', 'Source']
```

More examples: https://github.com/juancarlospaco/faster-than-requests/blob/master/examples/web_scraper_via_css_selectors.py

**Returns:** Scraped webs.

websocket_ping()

**Description:** WebSocket ping.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `"ws://echo.websocket.org"`.
- `data` Data to send, string type, optional, can be an empty string, defaults to `""`.
- `hangup` Close the socket without sending a close packet, optional, defaults to `False`; not sending the close packet can be faster.

Examples:

```python
import faster_than_requests as requests

requests.websocket_ping("ws://echo.websocket.org")
```

**Returns:** Response, `string` type, can be an empty string.

websocket_send()

**Description:** WebSocket send data, binary or text.

**Arguments:**

- `url` The remote URL, string type, required, must not be an empty string, example `"ws://echo.websocket.org"`.
- `data` Data to send, string type, optional, can be an empty string, defaults to `""`.
- `is_text` If `True` the data is sent as text, otherwise as binary, optional, defaults to `False`.
- `hangup` Close the socket without sending a close packet, optional, defaults to `False`; not sending the close packet can be faster.

Examples:

```python
import faster_than_requests as requests

requests.websocket_send("ws://echo.websocket.org", "data here")
```

**Returns:** Response, `string` type.

get2str()

**Description:** Takes a URL string, makes an HTTP GET, and returns the response body as a string.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string, example `"https://archlinux.org"`.

Examples:

```python
import faster_than_requests as requests

requests.get2str("http://example.com")
```

**Returns:** Response body, `string` type, can be empty string.

get2str2()

**Description:** Takes a list of URLs, makes 1 HTTP GET for each URL, and returns a list of strings with the response bodies. All the `GET`s run fully in parallel, in a single thread, in a single process.

**Arguments:**

- `list_of_urls` A list of the remote URLs, list type, required; items inside the list must be string type.

Examples:

```python
import faster_than_requests as requests

requests.get2str2(["http://example.com/foo", "http://example.com/bar"])  # Parallel GET
```

**Returns:** List of response bodies, `list` type; items are string type and can be empty strings; the list can be empty.
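For comparison only, a hedged sketch of the same fan-out using the stdlib `concurrent.futures` (the library itself parallelizes in a single thread; `fetch` and `get_many` below are stand-in names, not part of this library):

```python
from concurrent.futures import ThreadPoolExecutor

def get_many(fetch, urls):
    """Fetch every URL with `fetch` and return the bodies in input order.

    `fetch` is a stand-in for any one-URL getter, e.g. a function that
    performs an HTTP GET and returns the body as a string.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch, urls))

# Stand-in fetcher so the sketch runs without network access:
bodies = get_many(lambda url: f"<body of {url}>",
                  ["http://example.com/foo", "http://example.com/bar"])
```

`pool.map` preserves input order, which mirrors the list-in, list-out shape of `get2str2`.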

get2dict()

**Description:** Takes a URL, makes an HTTP GET, returns a dict with the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string, example `"https://alpinelinux.org"`.

Examples:

```python
import faster_than_requests as requests

requests.get2dict("http://example.com")
```

**Returns:** Response, `dict` type; values of the dict are string type and can be empty strings, but keys are always consistent.

get2json()

**Description:** Takes a URL, makes an HTTP GET, returns the response as JSON: minified, computer-friendly, single-line JSON by default, or pretty-printed when `pretty_print=True`.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string, example `"https://alpinelinux.org"`.
- `pretty_print` Pretty-print the JSON, optional, defaults to `False`.

Examples:

```python
import faster_than_requests as requests

requests.get2json("http://example.com", pretty_print=True)
```

**Returns:** Response body as JSON, `string` type; minified by default, pretty-printed when `pretty_print=True`.

post2str()

**Description:** Takes a URL, makes an HTTP POST, returns the response body as a string.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string.
- `body` The body data, string type, required, can be empty string.
- `multipart_data` MultiPart data, optional, list of tuples type, must not be empty list, example `[("key", "value")]`.

Examples:

```python
import faster_than_requests as requests

requests.post2str("http://example.com/api/foo", "My Body Data Here")
```

**Returns:** Response body, `string` type, can be empty string.

post2dict()

**Description:** Takes a URL, makes an HTTP POST, returns a dict with the response.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string.
- `body` The body data, string type, required, can be empty string.
- `multipart_data` MultiPart data, optional, list of tuples type, must not be empty list, example `[("key", "value")]`.

Examples:

```python
import faster_than_requests as requests

requests.post2dict("http://example.com/api/foo", "My Body Data Here")
```

**Returns:** Response, `dict` type; values of the dict are string type and can be empty strings, but keys are always consistent.

post2json()

**Description:** Takes a URL, makes an HTTP POST, returns the response as JSON.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string.
- `body` The body data, string type, required, can be empty string.
- `multipart_data` MultiPart data, optional, list of tuples type, must not be empty list, example `[("key", "value")]`.
- `pretty_print` Pretty-print the JSON, optional, defaults to `False`.

Examples:

```python
import faster_than_requests as requests

requests.post2json("http://example.com/api/foo", "My Body Data Here")
```

**Returns:** Response as JSON, `string` type.

post2list()

**Description:** Takes a list of URLs, makes 1 HTTP POST for each URL, returns a list of responses.

**Arguments:**

- `list_of_urls` The remote URLs, list type, required; items inside the list must be string type.
- `body` The body data, string type, required, can be empty string.
- `multipart_data` MultiPart data, optional, list of tuples type, must not be empty list, example `[("key", "value")]`.

Examples:

```python
import faster_than_requests as requests

requests.post2list(["http://example.com/api/foo"], "My Body Data Here")
```

**Returns:** List of response bodies, `list` type; items are string type and can be empty strings; the list can be empty.

download()

**Description:** Takes a URL and a filename, makes 1 HTTP GET, and saves the response body to the local file.

**Arguments:**

- `url` The remote URL, string type, required, must not be empty string.
- `filename` The local filename, string type, required, must not be empty string; full path recommended, can be a relative path, must include the file extension.

Examples:

```python
import faster_than_requests as requests

requests.download("http://example.com/api/foo", "my_file.ext")
```

**Returns:** None.

download2()

**Description:** Takes a list of `(url, filename)` tuples and makes 1 HTTP GET Download for each.

**Arguments:**

- `list_of_files` List of tuples; each tuple must be 2 items long: the first item is the URL and the second is the filename. The remote URL is string type, required, must not be empty string. The local filename is string type, required, must not be empty string, can be a full or relative path, must include the file extension.
- `delay` Delay between one download and the next, milliseconds precision (1000 = 1 second), integer type, optional, defaults to `0`, must be a positive integer.
- `threads` Passing `threads=True` uses Multi-Threading; `threads=False` or omitting it does NOT use Multi-Threading.

Examples:

```python
import faster_than_requests as requests

requests.download2([("http://example.com/cat.jpg", "kitten.jpg"), ("http://example.com/dog.jpg", "doge.jpg")])
```

**Returns:** None.

download3()

**Description:** Takes a list of `(url, filename)` tuples and makes 1 HTTP GET Download for each. It retries in a loop until the file is downloaded or `tries` reaches `0`, whichever happens first. If all retries have failed and `tries` is `0` it errors out.

**Arguments:**

- `list_of_files` List of tuples; each tuple must be 2 items long: the first item is the URL and the second is the filename. The remote URL is string type, required, must not be empty string. The local filename is string type, required, must not be empty string, can be a full or relative path, must include the file extension.
- `delay` Delay between one download and the next, milliseconds precision (1000 = 1 second), integer type, optional, defaults to `0`, must be a positive integer.
- `tries` How many Retries to attempt, positive integer type, optional, defaults to `9`.
- `backoff` Back-Off between retries, positive integer type, optional, defaults to `2`.
- `jitter` Jitter applied to the Back-Off between retries (Modulo math operation), positive integer type, optional, defaults to `2`.
- `verbose` Be Verbose, bool type, optional, defaults to `True`.

**Returns:** None.

Examples:

```python
import faster_than_requests as requests

requests.download3(
    [("http://INVALID/cat.jpg", "kitten.jpg"), ("http://INVALID/dog.jpg", "doge.jpg")],
    delay = 1,
    tries = 9,
    backoff = 2,
    jitter = 2,
    verbose = True,
)
```

Example of a failed download's output (intended):

```console
$ python3 example_fail_all_retry.py
Retry: 3 of 3 (url: "http://NONEXISTENT", filename: "a.json")
No such file or directory Additional info: "Name or service not known"
Retrying in 64 microseconds...
Retry: 2 of 3 (url: "http://NONEXISTENT", filename: "a.json")
No such file or directory Additional info: "Name or service not known"
Retrying in 128 microseconds (Warning: This is the last Retry!).
Retry: 1 of 3 (url: "http://NONEXISTENT", filename: "a.json")
No such file or directory Additional info: "Name or service not known"
Retrying in 256 microseconds (Warning: This is the last Retry!).
Traceback (most recent call last):
  File "example_fail_all_retry.py", line 3, in <module>
    downloader.download3()
...
$
```
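The failed-download output above shows the wait doubling on each retry (64, 128, 256). Purely as illustration, one plausible way such an exponential back-off schedule can be computed; the function name and parameters below are assumptions for the sketch, not the library's internals:

```python
def retry_waits(tries=3, base=64, backoff=2):
    """Illustrative exponential back-off schedule: base, base*backoff, base*backoff**2, ..."""
    return [base * backoff ** i for i in range(tries)]

retry_waits()  # [64, 128, 256], the progression seen in the sample output
```

A jitter term (the doc mentions a modulo operation) would perturb each wait slightly so that many clients retrying at once do not synchronize.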

set_headers()

**Description:** Set the HTTP Headers from the arguments. **This is for the functions that do NOT allow `http_headers` as an argument.**

**Arguments:**

- `http_headers` HTTP Headers, list of tuples type, required, example `[("key", "value")]`, example `[("DNT", "1")]`. Tuples must be 2 items long; the list must not be empty and the tuples must not be empty. The first item of the tuple is the key and the second item is the value; keys must not be empty strings, values can be empty strings, and both must be stripped.

Examples:

```python
import faster_than_requests as requests

requests.set_headers(headers = [("key", "value")])
```

```python
import faster_than_requests as requests

requests.set_headers([("key0", "value0"), ("key1", "value1")])
```

```python
import faster_than_requests as requests

requests.set_headers([("content-type", "text/plain"), ("dnt", "1")])
```

**Returns:** None.
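Since keys and values must arrive already stripped, a small helper can normalize a header list before passing it in. This normalization step is your own code, not part of the library:

```python
def normalize_headers(headers):
    """Strip surrounding whitespace from every header key and value."""
    return [(key.strip(), value.strip()) for key, value in headers]

normalize_headers([(" DNT ", " 1 "), ("content-type", "text/plain")])
# [('DNT', '1'), ('content-type', 'text/plain')]
```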

multipartdata2str()

**Description:** Takes MultiPart data and returns a string representation. Converts MultiPart data into 1 human-readable string. The human-friendly representation is not machine-friendly, so it is not serialization nor stringification; it is just for humans. It is faster than, and different from, the stdlib `parse_multipart`.

**Arguments:**

- `multipart_data` MultiPart data, optional, list of tuples type, must not be empty list, example `[("key", "value")]`.

Examples:

```python
import faster_than_requests as requests

requests.multipartdata2str([("key", "value")])
```

**Returns:** `string` type.

datauri()

**Description:** Takes data and returns a [standard Base64 Data URI (RFC-2397)](https://tools.ietf.org/html/rfc2397). At the time of writing, the Python stdlib `base64` module does not have a function that returns a Data URI (RFC-2397). The result can be used as a URL in HTML/CSS/JS. It is faster than, and different from, the stdlib `base64`.

**Arguments:**

- `data` Arbitrary data, string type, required.
- `mime` MIME type of `data`, string type, required, example `"text/plain"`.
- `encoding` Encoding, string type, optional, defaults to `"utf-8"` (recommended), example `"utf-8"`.

Examples:

```python
import faster_than_requests as requests

requests.datauri("Nim", "text/plain")
```

**Returns:** `string` type.
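For comparison, an RFC-2397 Data URI can be built by hand with only the stdlib; this sketch shows the general shape, though the library's exact output (e.g. parameter order) may differ:

```python
import base64

def stdlib_datauri(data, mime, encoding="utf-8"):
    """Hand-rolled RFC-2397 Data URI using only the Python stdlib."""
    payload = base64.b64encode(data.encode(encoding)).decode("ascii")
    return f"data:{mime};charset={encoding};base64,{payload}"

stdlib_datauri("Nim", "text/plain")
# 'data:text/plain;charset=utf-8;base64,Tmlt'
```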

urlparse()

**Description:** Parses any URL and returns parsed primitive values like `scheme`, `username`, `password`, `hostname`, `port`, `path`, `query`, `anchor`, `opaque`, etc. It is faster than, and different from, the stdlib `urlparse`.

**Arguments:**

- `url` The URL, string type, required.

Examples:

```python
import faster_than_requests as requests

requests.urlparse("https://nim-lang.org")
```

**Returns:** `scheme`, `username`, `password`, `hostname`, `port`, `path`, `query`, `anchor`, `opaque`, etc.
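For comparison, the stdlib equivalent exposes similar components (note the stdlib calls the anchor a `fragment`):

```python
from urllib.parse import urlsplit

# Split a URL into its primitive components with the stdlib:
parts = urlsplit("https://nim-lang.org/docs/manual.html?foo=1#bar")
parts.scheme    # 'https'
parts.hostname  # 'nim-lang.org'
parts.path      # '/docs/manual.html'
parts.query     # 'foo=1'
parts.fragment  # 'bar'
```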

urlencode()

**Description:** Encodes a URL according to RFC-3986, string to string. It is faster than, and different from, the stdlib `urlencode`.

**Arguments:**

- `url` The URL, string type, required.
- `use_plus` When `use_plus` is `true`, spaces are encoded as `+` instead of `%20`.

Examples:

```python
import faster_than_requests as requests

requests.urlencode("https://nim-lang.org", use_plus = True)
```

**Returns:** `string` type.

urldecode()

**Description:** Decodes a URL according to RFC-3986, string to string. It is faster than, and different from, the stdlib `unquote`.

**Arguments:**

- `url` The URL, string type, required.
- `use_plus` When `use_plus` is `true`, `+` is decoded as a space instead of `%20`.

Examples:

```python
import faster_than_requests as requests

requests.urldecode(r"https%3A%2F%2Fnim-lang.org", use_plus = False)
```

**Returns:** `string` type.
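For comparison, the stdlib offers the same two decoding behaviors as separate functions:

```python
from urllib.parse import unquote, unquote_plus

# Percent-decoding only:
unquote("https%3A%2F%2Fnim-lang.org")   # 'https://nim-lang.org'

# Percent-decoding that also turns '+' into spaces (form-encoded data):
unquote_plus("hello+world%21")          # 'hello world!'
```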

encodequery()

**Description:** Encodes a query string according to RFC-3986, list of tuples to string. It is faster than, and different from, the stdlib `quote_plus`.

**Arguments:**

- `query` List of tuples, required, example `[("key", "value")]`, example `[("DNT", "1")]`.
- `omit_eq` If the value is an empty string then the `=""` is omitted, unless `omit_eq` is `false`.
- `use_plus` When `use_plus` is `true`, spaces are encoded as `+` instead of `%20`.

Examples:

```python
import faster_than_requests as requests

requests.encodequery([("key", "value")], use_plus = True, omit_eq = True)
```

**Returns:** `string` type.
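For comparison, the stdlib builds query strings from the same list-of-tuples shape, and its `quote_via` parameter toggles the same `+` versus `%20` choice as `use_plus`:

```python
from urllib.parse import urlencode, quote

# Default: spaces become '+' (quote_plus behavior):
urlencode([("key", "some value")])                   # 'key=some+value'

# With quote_via=quote, spaces become '%20' instead:
urlencode([("key", "some value")], quote_via=quote)  # 'key=some%20value'
```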

encodexml()

**Description:** Converts the characters `&`, `<`, `>`, `"` in a string to an HTML-safe string; the output is valid XML. Use this if you need to display text that might contain such characters in HTML, SVG or XML. It is faster than, and different from, the stdlib `html.escape`.

**Arguments:**

- `s` Arbitrary string, required.

Examples:

```python
import faster_than_requests as requests

requests.encodexml("<h1>Hello World</h1>")
```

**Returns:** `string` type.

minifyhtml()

**Description:** Fast HTML and SVG Minifier. Not an obfuscator.

**Arguments:**

- `html` HTML string, required.

Examples:

```python
import faster_than_requests as requests

requests.minifyhtml("<p>\n  Hello\n</p>\n<p>\n  World\n</p>")
```

**Returns:** `string` type.

gen_auth_header()

**Description:** Helper for HTTP Authentication headers. Returns 1 string, kinda like `"Basic base64(username):base64(password)"`, so it can be used like `[("Authorization", gen_auth_header("username", "password"))]`. See https://github.com/juancarlospaco/faster-than-requests/issues/168#issuecomment-858999317

**Arguments:**

- `username` Username string, must not be empty string, required.
- `password` Password string, must not be empty string, required.

**Returns:** `string` type.
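For reference, the standard RFC 7617 Basic header encodes `username:password` as one Base64 token; this stdlib sketch shows that canonical form (the library's exact output format may differ, see the linked issue):

```python
import base64

def basic_auth_header(username, password):
    """Standard HTTP Basic Authentication header value (RFC 7617)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

basic_auth_header("username", "password")
# 'Basic dXNlcm5hbWU6cGFzc3dvcmQ='
```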

debugs()

**Description:** Debugs the internal configuration of the library. Takes no arguments, returns nothing, and prints the pretty-printed, human-friendly, multi-line JSON configuration to standard output.

**Arguments:** None.

Examples:

```python
import faster_than_requests as requests

requests.debugs()
```

**Returns:** None.

optimizeGC()

**Description:** This module uses a compile-time deterministic memory management GC (kinda like Rust, but for Python). Python at run-time makes a pause, runs a Garbage Collector, and resumes again after the pause. `gctricks.optimizeGC` lets you temporarily skip the Python GC pauses at run-time inside a context manager block; this is the proper way to use this module for benchmarks. It is optional but recommended. We did not invent this: it is inspired by, and battle-tested by, the Instagram Engineering team:

- https://instagram-engineering.com/dismissing-python-garbage-collection-at-instagram-4dca40b29172

This is NOT a function; it is a context manager. It takes no arguments and returns nothing. It calls `init_client()` at the start and `close_client()` at the end automatically.

Examples:

```python
from gctricks import optimizeGC

with optimizeGC:
    ...  # All your HTTP code here. Chill the GC. Calls init_client() and close_client() automatically.
# GC run-time pauses enabled again.
```

init_client()

**Description:** Instantiates the HTTP Client object, for deferred initialization; call it before the start of all HTTP operations. `get()`, `post()`, `put()`, `patch()`, `delete()`, `head()` do NOT need this, because they auto-init. This exists for performance reasons, to defer the initialization, and was requested by the community. It is optional but recommended. Read the `optimizeGC` documentation before using.

**Arguments:** None.

Examples:

```python
import faster_than_requests as requests

requests.init_client()
# All your HTTP code here.
```

**Returns:** None.

close_client()

**Description:** Tears down the HTTP Client object, for deferred de-initialization; call it after the end of all HTTP operations. `get()`, `post()`, `put()`, `patch()`, `delete()`, `head()` do NOT need this, because they handle it automatically. This exists for performance reasons, to defer the de-initialization, and was requested by the community. It is optional but recommended. Read the `optimizeGC` documentation before using.

**Arguments:** None.

Examples:

```python
import faster_than_requests as requests

# All your HTTP code here.
requests.close_client()
```

**Returns:** None.

For more Examples check the Examples and Tests.

Instead of having a pair of functions with a lot of arguments that you should provide to make it work, we have tiny functions with very few arguments that do one thing and do it as fast as possible.

A lot of functions are oriented to Data Science, Big Data, Open Data, Web Scraping, and working with HTTP REST JSON APIs.

Install

Docker

$ ./build-docker.sh
$ ./run-docker.sh
$ ./server4benchmarks &  # Inside Docker.
$ python3 benchmark.py   # Inside Docker.

Dependencies

Platforms

Extras

More Faster Libraries...

Requisites

Windows

win-compile

The only software needed is Git for Windows and Nim.

Reboot after install. Administrator required for install. Everything must be 64Bit.

If that fails too, don't waste time and go directly for Docker for Windows.

For info about how to install Git for Windows, read Git for Windows Documentation.

For info about how to install Nim, read Nim Documentation.

For info about how to install Docker for Windows, read the Docker for Windows documentation.

GitHub Actions builds everything from zero on each push; use it as guidance too.

nimble install nimpy
nim c -d:ssl -d:danger --app:lib --tlsEmulation:on --out:faster_than_requests.pyd faster_than_requests.nim

Sponsors

- **Bitcoin BTC:** BEP20 Binance Smart Chain Network (BSC) `0xb78c4cf63274bb22f83481986157d234105ac17e`; BTC Bitcoin Network `1Pnf45MgGgY32X4KDNJbutnpx96E4FxqVi`
- **Ethereum ETH, Dai DAI, Uniswap UNI, Axie Infinity AXS, Smooth Love Potion SLP:** BEP20 Binance Smart Chain Network (BSC) `0xb78c4cf63274bb22f83481986157d234105ac17e`; ERC20 Ethereum Network `0xb78c4cf63274bb22f83481986157d234105ac17e`
- **Tether USDT:** BEP20 Binance Smart Chain Network (BSC) `0xb78c4cf63274bb22f83481986157d234105ac17e`; ERC20 Ethereum Network `0xb78c4cf63274bb22f83481986157d234105ac17e`; TRC20 Tron Network `TWGft53WgWvH2mnqR8ZUXq1GD8M4gZ4Yfu`
- **Solana SOL:** BEP20 Binance Smart Chain Network (BSC) `0xb78c4cf63274bb22f83481986157d234105ac17e`; SOL Solana Network `FKaPSd8kTUpH7Q76d77toy1jjPGpZSxR4xbhQHyCMSGq`
- **Cardano ADA:** BEP20 Binance Smart Chain Network (BSC) `0xb78c4cf63274bb22f83481986157d234105ac17e`; ADA Cardano Network `DdzFFzCqrht9Y1r4Yx7ouqG9yJNWeXFt69xavLdaeXdu4cQi2yXgNWagzh52o9k9YRh3ussHnBnDrg7v7W2hSXWXfBhbo2ooUKRFMieM`
- **Sandbox SAND, Decentraland MANA:** ERC20 Ethereum Network `0xb78c4cf63274bb22f83481986157d234105ac17e`
- **Algorand ALGO:** ALGO Algorand Network `WM54DHVZQIQDVTHMPOH6FEZ4U2AU3OBPGAFTHSCYWMFE7ETKCUUOYAW24Q`
Binance https://pay.binance.com/en/checkout/e92e536210fd4f62b426ea7ee65b49c3

FAQ

Feel free to Fork, Clone, Download, Improve, Reimplement, Play with this Open Source. Make it 10 times faster, 10 times smaller.

Yes.

Yes.

No.

No.

I dunno. (Not supported)

No.

No.

https://github.com/juancarlospaco/faster-than-requests/issues

url="http://example.com" (1st argument always).

body="my body"

set_headers()

I dunno.

For speed and performance reasons: a dict is slower, bigger, heavier, and mutable compared to a tuple.
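The size part of that claim is easy to check on CPython with the stdlib `sys.getsizeof` (exact byte counts vary by Python version and platform, but a 2-item tuple is consistently smaller than a 1-entry dict):

```python
import sys

pair = ("key", "value")
mapping = {"key": "value"}

# On CPython the tuple's object header plus two slots is smaller
# than the dict's header plus its hash table:
sys.getsizeof(pair) < sys.getsizeof(mapping)  # True
```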

Maybe it works on 32Bit, but it is not supported: integer sizes are too small and performance can be worse.

Maybe it works on Python 2, but it is not supported and performance can be worse; we suggest migrating to Python 3.

Functions do not have internal `try: except:` blocks, so you can wrap them inside `try: except:` blocks if you need very resilient code.
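One hedged sketch of such a wrapper; `safe_call` and `failing_get` are illustrative names of your own code, with `failing_get` standing in for any of this library's functions (e.g. `get2str`) so the fallback path is visible without network access:

```python
def safe_call(func, *args, default=None):
    """Run `func`, returning `default` instead of raising on failure.

    The library's functions carry no internal try/except, so resilience
    is the caller's job; this wrapper centralizes it.
    """
    try:
        return func(*args)
    except Exception:
        return default

# Stand-in that always fails, to exercise the fallback path:
def failing_get(url):
    raise OSError("Name or service not known")

safe_call(failing_get, "http://NONEXISTENT", default="")  # returns ""
```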

Add at the end of the PIP install command:

--isolated --disable-pip-version-check --no-cache-dir --no-binary :all:

Not my Bug.

build.sh or build.nims

package.sh or package.nims

No.

Unmodified raw output of the Python `timeit` module.

Please send Pull Request to Python to improve the output of timeit.

Projects that use Cython won't count the whole of Cython in their LOC, so we won't either.

Stars

:star: @juancarlospaco
:star: @nikitavoloboev
:star: @5u4
:star: @CKristensen
:star: @Lips7
:star: @zeandrade
:star: @SoloDevOG
:star: @AM-I-Human
:star: @pauldevos
:star: @divanovGH
:star: @ali-sajjad-rizavi
:star: @George2901
:star: @jeaps17
:star: @TeeWallz
:star: @Shinji-Mimura
:star: @oozdemir83
:star: @aaman007
:star: @dungtq
:star: @bonginkosi0607
:star: @4nzor
:star: @CyberLionsNFT
:star: @Lyapsus
:star: @boskuv
:star: @jckli
:star: @VitSimon
:star: @zjmdp
:star: @maxclac
:star: @krishna2206
:star: @KhushC-03
:star: @nicksnell
:star: @skandix
:star: @gioleppe
:star: @mvandermeulen
:star: @Vexarr
:star: @baajarmeh
:star: @znel2002
:star: @matkuki
:star: @SmartManoj
:star: @Zardex1337
:star: @jocago
:star: @gnimmel
:star: @sting8k
:star: @c4p-n1ck
:star: @w0xi
:star: @Bing-su
:star: @AdrianCert
:star: @asanoop24
:star: @zaphodfortytwo
:star: @Ozerioss
:star: @nkot56297
:star: @sendyputra
:star: @thedrow
:star: @MinhHuyDev
:star: @rapjul
:star: @Arthavruksha
:star: @xdroff
:star: @ShodhArthavruksha
:star: @aziddddd
:star: @breezechen
:star: @TobeTek
:star: @antonpetrov145
:star: @MartianDominic
:star: @sk37025
:star: @Alexb309
:star: @alimp5
:star: @oldhroft
:star: @davidcralph
:star: @rainmana
:star: @amitdo
:star: @atlassion
:star: @cytsai1008
:star: @bsouthern
:star: @techrattt
:star: @vnicetn
:star: @Perry-xD
:star: @israelvf
:star: @BernardoOlisan
:star: @ZenoMilk12
:star: @rundef
:star: @semihyesilyurt
:star: @dunaevv
:star: @mlueckl
:star: @johnrlive
:star: @CrazyBonze
:star: @v-JiangNan
:star: @justfly50
:star: @mamert
:star: @ccamateur
:star: @5l1v3r1
:star: @Wykiki
:star: @Kladdkaka
:star: @giubaru
:star: @eamigo86
:star: @eadwinCode
:star: @s4ke
:star: @DG4MES
:star: @AlexJupiterian
:star: @baodinhaaa
:star: @a-golda
:star: @Furkan-izgi
:star: @Abdulvoris101
:star: @devlenni
:star: @kasahh
:star: @vishaltanwar96
:star: @codehangen
:star: @svenko99
:star: @kb1900
:star: @lusteroak
:star: @nitheesh-cpu
:star: @techpixel
:star: @tk-iitd
:star: @smartniz
:star: @popcrocks
:star: @senic35
:star: @NychetheAwesome
:star: @rafalohaki
:star: @Danipulok
:star: @lwinkelm
:star: @sunlei
:star: @Minnikeswar
:star: @theol-git
:star: @Mohammad-Mohsen
:star: @neeksor
:star: @0xNev
:star: @imvast
:star: @daweedkob
:star: @Landcruiser87
:star: @kirillzhosul
:star: @FurkanEdizkan
:star: @sodinokibi
:star: @stepan-zubkov
:star: @Nexlab-One
:star: @PApostol
:star: @callmeAsadUllah
:star: @jaredv1
:star: @Goblin80
:star: @michikxd
:star: @babywyrm
:star: @MooneDrJune
:star: @grknbyk
:star: @francomerendan
:star: @noudin-ledger
:star: @chip-felton-montage
:star: @Ruddy35
:star: @xilicode
:star: @BrianTurza
:star: @oguh43
:star: @oyoxo
:star: @encoreshao
:star: @peter279k
:star: @xalien10
:star: @DahnJ
:star: @ld909
:star: @lafabo
:star: @AndrewGPU
:star: @jerheff
:star: @wtv-piyush
:star: @themorya
:star: @frank-cq
:star: @sunilravilla
:star: @root-11
:star: @BrianPugh
:star: @kioenguyen0207
:star: @dotpmrcunha
:star: @eldar-mustafayev
:star: @mrkprdo
:star: @pythoninthegrass
:star: @jaysontran-novobi
:star: @eleqtrizit
:star: @BlackNurse
:star: @rruimor
:star: @gosoxharp
:star: @scripthon
:star: @WillianFF
:star: @yashprakash13
:star: @rhyd0n
:star: @meet-ai
:star: @Cremeux
:star: @hawk-roman-rey
:star: @OldMidnight
:star: @christos-bsq
:star: @Xenia101
:star: @beholders-eye
:star: @lectinua
:star: @pietroppeter
:star: @linwaytin
:star: @0x13A0F
:star: @ocervell
:star: @ZhymabekRoman
:star: @kabartay
:star: @tkanhe
:star: @yudelevi
:star: @kantarcise
:star: @fernando-aristizabal
:star: @BrianNTang
:star: @asen-mitov
:star: @mrgick
:star: @jpmanson
:star: @thvadora
:star: @jramosss
:star: @AleksandrUsolcev
:star: @eantho
:star: @RodrigoTorresWeb
:star: @mighty-phoenix
:star: @DSDanielPark
:star: @0xHaris
:star: @watchakorn-18k
:star: @enesalsat
:star: @eal
:star: @abhie-lp
:star: @martinskou
:star: @SHjalilo
:star: @chrisgoddard
:star: @OuMal
:star: @dylanosaur
:star: @Seirdy
:star: @mix-protocol
:star: @hoangthanh168
:star: @LaffX
:star: @Kanchangk
:star: @epistemicjanitor
:star: @danielsalvador
:star: @Utone
:star: @andimahathir
:star: @alexvivanov
:star: @resort-io
:star: @itschasa
:star: @Kensingtonn
:star: @christopherhall1815
:star: @edwardlopez0311
:star: @XMVZ
:star: @robd003
:star: @Tatsh
:star: @movrsi
:star: @rachen
:star: @DavideGalilei
:star: @linusheinz
:star: @levushakov
:star: @zerintg
:star: @arynyklas
:star: @infamix
:star: @aspekts
:star: @shahabejaz
:star: @vovavili
:star: @FingonCelebrindal
:star: @ysnbogt
:star: @Athroniaeth
:star: @hansalemaos
:star: @xTrayambak
:star: @extrememicro
:star: @Pineapple217
:star: @PiyushDixit96
:star: @NateShoffner
:star: @noe1sanji
:star: @bsljth
:star: @hellogeraldblah
:star: @ande128
:star: @Murplugg
:star: @juguerre
:star: @atksh
:star: @k-kawakami213
:star: @fredlarochelle
:star: @emekeh
:star: @KNipunDananjaya
:star: @cryoz
:star: @weztask
:star: @retouching
:star: @degaur
:star: @testtsubscribe
:star: @catsuns
:star: @Leiiib
:star: @mqt2r
:star: @solarsbeans
:star: @F1ashhimself
:star: @IsakWesterlund
:star: @Mr-Jef
:star: @ghbook
:star: @zyr1on
:star: @goznauk
:star: @scmanjarrez
:star: @NiYeh
:star: @Sepehr0011
:star: @JarvanLei
:star: @kp6c6p6c
:star: @DrayChou
:star: @szmyty
:star: @f1refa11
:star: @ddzzhen
:star: @flrngel
:star: @Sergimayol
:star: @Redrrx
:star: @ilyazub
:star: @AlskaPark
:star: @louyongjiu
:star: @adbforlife
:star: @rahulmr
:star: @soyandrey
:star: @malaVydra
:star: @RookCube
:star: @xjzh123
:star: @Ruy-Araujo
:star: @vldkhramtsov
:star: @watsonhaw5566
:star: @kumchick2055
:star: @Stallon-niranjan
:star: @devshjeon
:star: @euchaliptus
:star: @S75XD
:star: @gster
:star: @QuanDeppChaii
:star: @SevenworksDev
:star: @cframe1337
:star: @Mdevpro78
:star: @ging-dev
:star: @SYFH
:star: @Vchase-7047
:star: @ZhReimu
:star: @elvishoo
:star: @KeepCodeing
:star: @Fumeze
:star: @JerryLiao26
:star: @ashrafnezhad-hamidreza
:star: @Lofimit
:star: @shaohaiyang
:star: @668
:star: @baby0o01999
:star: @lightcax
:star: @gister9000
:star: @BiltuDas1
:star: @ttycelery
:star: @raihan-faza
:star: @hiiruki
:star: @MhankBarBar
:star: @MujurID
:star: @tommy-ca
:star: @nichdemos
:star: @CheeseTurtle
:star: @sleepingcat4
:star: @bmouler
:star: @MaxMorais
:star: @juzt3
:star: @dotsource
:star: @abdalrahman-saqr
:star: @AlexZotikov
:star: @Chaunice
:star: @hewhocannotbetamed
:star: @xiaojiujiuY9
:star: @vvanglro
:star: @KadeWuVungle
:star: @CC1001001
:star: @akeshmiri
:star: @visualrobots
:star: @vypivshiy
:star: @ttrzcinski
:star: @FastFingertips
:star: @christianmalek
:star: @simms21
:star: @cenviity
:star: @iLollek
:star: @ChemicalNRG
:star: @Ehsan-U
:star: @hosven
:star: @JosepHyv
:star: @luczay
:star: @Moat6
:star: @jydxkj
:star: @ivanrvpereira
:star: @TxQISchokEZz
:star: @kerrycobb
:star: @lucasnuic
:star: @lyenliang
:star: @danila-panteleev
:star: @saarsil
:star: @amitness
:star: @LaxmanSinghTomar
:star: @pawanpaudel93
:star: @dcyoung
:star: @divyamani1
:star: @manzak
:star: @dddyom
:star: @zwiebelslayer
:star: @WangWei90
:star: @bpenedo
:star: @GhostYiL
:star: @Darkcast
:star: @akasakaid
:star: @rhysdg
:star: @ahmadmfani
:star: @naufaljct48
:star: @amiune
:star: @marylnrose
:star: @himanshu076
:star: @herlangga72
:star: @SamDc73
:star: @ewnprn112
:star: @rainmanzzz
:star: @ranjian0
:star: @joelvaneenwyk
:star: @shalahu
:star: @warezit
:star: @firasuke
:star: @bit-code
:star: @beingDave
:star: @rekayno
:star: @EventGamer67
:star: @arthurauffray
:star: @richardadonnell
:star: @cOborski
:star: @momvov
:star: @M-logique
:star: @mohammad87115
:star: @Vorrik
:star: @aleemahmed96