Introducing Fetch:
A Web Scraping Platform Engineered for Simplicity & Scale

We make it simple for developers to create, deploy, and maintain custom web scrapers at scale

Sign Up for Beta Access   Learn More

Sign Up now and get a FREE $100 credit

Save Time & Effort

Short Learning Curve. An Easy-to-Use Platform for Web Scraping and Web Crawling

Integrated Development Flow

Robust End-to-End Infrastructure for your Team to Develop, Run & Maintain Web Scrapers & Crawlers

Full API Access

Integrate your apps with our API to interact with your scrapers and data.

Git Integration

Easily deploy from Github or any other Git repository.

Designed for Developers

Easily develop & maintain scrapers in the Ruby language.

eBay Scraper Example

A basic eBay scraper that loops through the results pages and detail pages. A great script to start building your own eBay scraper on.

  View Source

View other scrapers

# initialize nokogiri
nokogiri = Nokogiri.HTML(content)

# get the listings
listings = nokogiri.css('ul.b-list__items_nofooter li.s-item')

# loop through the listings
listings.each do |listing|
  # save the product info to outputs
  outputs << {
    _collection: "products",
    title: listing.at_css('h3.s-item__title')&.text,
    price: listing.at_css('.s-item__price')&.text
  }

  # enqueue the details page to be scraped, if the listing links to one
  item_link = listing.at_css('a.s-item__link')
  unless item_link.nil?
    pages << {
      url: item_link['href'],
      page_type: 'details'
    }
  end
end
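The results-page loop above enqueues 'details' pages. As a hypothetical sketch (the method names and entry point are illustrative, not the platform's actual API), a scraper could route each page by the page_type it was enqueued with:

```ruby
# Hypothetical router: assumes the platform runs the scraper once per
# enqueued page and passes along that page's page_type. Names illustrative.
def parse(page_type)
  case page_type
  when 'listings'
    :parse_listings  # the results-page loop shown above
  when 'details'
    :parse_details   # would extract full product info from one listing
  else
    raise ArgumentError, "unknown page_type: #{page_type.inspect}"
  end
end
```

Routing on page_type keeps one scraper script handling every kind of page it enqueues.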

Features

Auto Proxy Rotation

No need to worry about IP bans: we auto-rotate IPs on every request that is made.

Randomized User Agents

Avoid fingerprinting of your scraper's requests through our automatic randomization of user agents.
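As an illustration of the technique only (the platform's own user-agent pool and rotation logic are not shown here), randomizing user agents in plain Ruby can be as simple as:

```ruby
# Illustrative only: a tiny pool of user-agent strings, one picked at
# random per request so consecutive requests don't share a fingerprint.
USER_AGENTS = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36',
  'Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0'
].freeze

# a header hash you might pass to an HTTP client
headers = { 'User-Agent' => USER_AGENTS.sample }
```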

Shared Cache

Speed up scraping and lower costs by scraping from our shared cache of web pages.

JavaScript Rendering

Render pages that use JavaScript, so that you can easily scrape complex pages.

Custom Rubygems

Use your favorite rubygems to help you scrape more effectively.
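As a hypothetical illustration (the gem names below are just examples, not requirements), a scraper's Gemfile might declare the helper gems it depends on:

```ruby
# Gemfile (hypothetical contents)
source 'https://rubygems.org'

gem 'nokogiri'  # HTML parsing
gem 'money'     # price parsing and formatting
```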

Easy Scraper Troubleshooting

View the scraping log to pinpoint bugs in your scraper.

Parallel Scraping

Whether you want to scrape multiple websites at once, or scrape one site faster, we can handle it.

Cron Based Scheduler

Use cron's powerful scheduling syntax to schedule your scraper to run at the times you specify.
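For example, standard cron expressions work as-is (the schedules below are illustrative; the five fields are minute, hour, day of month, month, and day of week):

```
# every hour, on the hour
0 * * * *

# daily at 2:30 AM
30 2 * * *

# 9:00 AM on weekdays (Mon-Fri)
0 9 * * 1-5
```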

Export to Various Formats

Easily export to JSON, CSV, or other formats.
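As a minimal sketch of what exported records look like in those formats (the sample data is invented for illustration), Ruby's standard library alone can produce both:

```ruby
require 'json'
require 'csv'

# invented sample records, shaped like the scraper's product outputs
products = [
  { title: 'Vintage Clock', price: '$24.50' },
  { title: 'Coffee Mug',    price: '$9.99'  }
]

# JSON export
json_out = JSON.pretty_generate(products)

# CSV export: header row from the keys, then one row per record
csv_out = CSV.generate do |csv|
  csv << products.first.keys
  products.each { |p| csv << p.values }
end
```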

 

Flexible Pricing

Fetch allows the flexibility of allocating parallel workers to your scrapers.
The more workers you purchase, the faster your scrapers go. Workers are priced per hour the scraper runs.

  • Pay as you go
  • Standard Workers

Allows you to scrape using regular HTTP requests

Price Per Worker: USD $0.07/hour

Workers Selected: 1

  • Browser Workers

    Allows you to scrape using a Chrome browser

Price Per Worker: USD $0.14/hour

Workers Selected: 0

  • Estimated Costs

                          Per Hour   Per Day   Per Week   Per Month
    Web Scraped Pages*    416        9,984     69,888     300,000
    Cache Scraped Pages*  1,250      30,000    210,000    900,000
    Total Costs**         $0.277     $6.648    $46.536    $200

    *Approximate. Results vary depending on how performant the target server is, among other factors.
    ** Workers are priced per hour. You can start and stop the scraper at any time; we will total your usage and round it to the nearest hour.

  • Enterprise
  • We offer everything needed to run web scrapers at scale. Get in touch for details.
  • Scraper Development & Maintenance
  • Integrations
  • Pricing at scale
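The worker-hour pricing above can be sketched as a quick back-of-the-envelope calculation (rates taken from the table; rounding follows the billing note, and the helper itself is just an illustration):

```ruby
# Hedged sketch of the pricing model: worker count times hourly rate,
# with runtime rounded to the nearest hour per the billing note.
STANDARD_RATE = 0.07  # USD per standard worker, per hour
BROWSER_RATE  = 0.14  # USD per browser worker, per hour

def estimated_cost(standard_workers: 0, browser_workers: 0, runtime_hours: 0)
  billed_hours = runtime_hours.round  # rounded to the nearest hour
  billed_hours * (standard_workers * STANDARD_RATE +
                  browser_workers  * BROWSER_RATE)
end
```

For example, one standard worker running for 10 hours comes to roughly $0.70, and two browser workers for 5 hours to roughly $1.40.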

Contact Us

Say hi and tell us more about your needs

AnswersEngine.com
10 Dundas St E,
Toronto, ON M5B 2G9
Canada
P: +1(347) 835-5558

Answers Engine is a Canadian-based startup
and is incubated in the DMZ incubator
at Ryerson University, Toronto, Canada

© 2018 AnswersEngine.com