Introducing Fetch
A Global Cache of Public Web Pages that Anyone can Scrape

Empowering Developers to Scrape the Web at Large Scale by Utilizing a Collectively Built Cache of Public Web Pages

Sign Up for Beta Access   Learn More

Sign Up now and get a $100 FREE credit

Empowering People with Knowledge

Empowering people with valuable knowledge that can be extracted easily and at large scale from the whole web.

Being a Good Neighbor

52% of internet traffic is generated by bots. By scraping the cache, you free up the world's bandwidth and resources meant for real people, not bots.

Removing Barriers

Only a few organizations in the world have the capability to take on the massive effort of crawling the whole web. You help remove barriers by making the whole web more accessible.

How it Works

Cache built collectively

1. Run Scraper

Run your web scraper and choose how fresh the cached contents should be. If a cached copy exists within your specified freshness criteria, your scraper reads from the cache; otherwise, it fetches fresh content from the web (see the sketch after these steps).

2. Cache Built Collectively

Any time your scraper fetches fresh content from the web, that content is stored in the Global Cache until it is replaced with fresher content by other scrapers.

3. Everyone Benefits

As everyone scrapes the web, the Global Cache gets populated, and everyone benefits from faster scraping speeds and lower costs.
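
Conceptually, the cache-or-fetch decision behaves like the minimal Ruby sketch below. This is an illustration only; cache, fetch_from_web, and FRESHNESS_DAYS are hypothetical names, not the actual Fetch API.

# illustration only: how a freshness-based cache lookup behaves conceptually
# `cache`, `fetch_from_web`, and FRESHNESS_DAYS are hypothetical names
FRESHNESS_DAYS = 7 # accept cached copies up to 7 days old

def get_page(url, cache)
  entry = cache[url]
  max_age = FRESHNESS_DAYS * 24 * 60 * 60

  if entry && entry[:fetched_at] > Time.now - max_age
    # a fresh-enough copy exists in the Global Cache: scrape the cache
    entry[:content]
  else
    # no cached copy, or it is too stale: scrape fresh content from the web
    # and store it in the cache for the next scraper
    content = fetch_from_web(url)
    cache[url] = { content: content, fetched_at: Time.now }
    content
  end
end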

Designed for Developers

Easily develop & maintain scrapers in the Ruby language.

eBay Scraper Example

A basic eBay scraper that loops through the results pages and detail pages. A great script to start building your own eBay scraper on.


# initialize nokogiri with the page content
# (content, outputs, and pages are provided by the platform)
nokogiri = Nokogiri.HTML(content)

# get the listings
listings = nokogiri.css('ul.b-list__items_nofooter li.s-item')

# loop through the listings
listings.each do |listing|
  # save the product info to outputs
  outputs << {
    _collection: "products",
    title: listing.at_css('h3.s-item__title')&.text,
    price: listing.at_css('.s-item__price')&.text
  }

  # enqueue the details page to be scraped, if the listing links to one
  item_link = listing.at_css('a.s-item__link')
  unless item_link.nil?
    pages << {
      url: item_link['href'],
      page_type: 'details'
    }
  end
end
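
The detail pages enqueued above would then be handled by a separate details parser. A minimal sketch of what that parser could look like is shown below; the selectors and the page variable are illustrative assumptions, not part of the source above.

# illustrative details-page parser; the selectors and `page` are assumptions
nokogiri = Nokogiri.HTML(content)

# save the expanded product details to outputs
outputs << {
  _collection: "product_details",
  url: page['url'],
  title: nokogiri.at_css('h1.x-item-title__mainTitle')&.text&.strip,
  condition: nokogiri.at_css('.x-item-condition-text')&.text&.strip
}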

Features

Integrated Development Flow

Robust end-to-end infrastructure for your team to develop, run & maintain web scrapers & crawlers.

Auto Proxy Rotation

No need to worry about IP bans; we automatically rotate IPs on every request.

Randomized User Agents

Avoid fingerprinting of your scraper requests through automatic randomization of user agents.

Save Time & Effort

Short learning curve and an easy-to-use platform for web scraping and crawling.

Full API Access

Integrate your apps with your scrapers and data through the API.

JavaScript Rendering

Render pages that use JavaScript, so you can easily scrape complex pages.

Custom Rubygems

Use your favorite Rubygems to help you scrape better.

Easy Troubleshooting of Scrapers

View the scraping log to pinpoint bugs in your scraper.

Git Integration

Easily deploy from GitHub or any other Git repository.

Parallel Scraping

Whether you want to scrape multiple websites at once, or scrape one site faster, we can handle it.

Cron Based Scheduler

Use cron's powerful scheduling syntax to run your scraper at the times you specify.
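
For instance, standard cron expressions like the following could be used (illustrative schedules, not defaults):

0 2 * * *         runs every day at 2:00 AM
30 */6 * * 1-5    runs at minute 30 of every 6th hour, Monday through Friday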

Export to Various Formats

Easily export to JSON, CSV, or other formats.

 

Flexible Pricing

Fetch gives you the flexibility to allocate parallel workers to your scrapers.
The more workers you purchase, the faster your scrapers go. Workers are priced per hour of scraper runtime (a worked cost sketch follows the pricing list below).

  • Pay as you go
  • Standard Workers

    Allows you to scrape using regular HTTP requests

    Price Per Worker: USD$.07/hour

    Total: 1

  • Browser Workers

    Allows you to scrape using a Chrome browser

    Price Per Worker: USD$.14/hour

    Total: 0

  • Estimated Costs

    Estimated Volume & Cost     Per Hour     Per Day      Per Week     Per Month
    Web Scraped Pages*          416          9,984        69,888       300,000
    Cache Scraped Pages*        1,250        30,000       210,000      900,000
    Total Costs**               $0.277       $6.648       $46.536      $200

    *Approximate. Results vary depending on how performant the target server is, among other factors.
    ** Workers are priced per hour. You can start and stop the scraper at any time; we will total your usage and round it to the nearest hour.

  • Enterprise
  • We offer everything needed to run web scrapers at scale. Get in touch for details.
  • Scraper Development & Maintenance
  • Integrations
  • Pricing at scale
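
As a rough illustration of the per-worker hourly rates listed above, the Ruby sketch below estimates the worker cost of a run. The helper name and inputs are hypothetical, and it covers worker time only, not the page estimates in the table.

# hypothetical cost estimate based only on the listed per-worker hourly rates
STANDARD_RATE = 0.07 # USD per standard worker per hour
BROWSER_RATE  = 0.14 # USD per browser worker per hour

def estimated_worker_cost(standard_workers:, browser_workers:, hours:)
  # usage is totalled and rounded to the nearest hour, per the pricing notes above
  billable_hours = hours.round
  (standard_workers * STANDARD_RATE + browser_workers * BROWSER_RATE) * billable_hours
end

# e.g. 2 standard workers and 1 browser worker running for 10 hours:
# (2 * 0.07 + 1 * 0.14) * 10 = 2.80 USD
puts estimated_worker_cost(standard_workers: 2, browser_workers: 1, hours: 10).round(2)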

Contact Us

Say hi and tell us more about your needs

AnswersEngine.com
10 Dundas St E,
Toronto, ON M5B 2G9
Canada
P: +1(347) 835-5558

Answers Engine is a Canada-based startup
incubated at the DMZ Incubator
at Ryerson University, Toronto, Canada

© 2018 AnswersEngine.com