This project focused on web crawling with Scrapy, a Python library. The goal was to build a web crawler that crawled www.dealnews.com, fetched all the deals, and sorted them into relevant categories. Initially the deals were stored on a local server, but after an automatic crawl scheduler was integrated (crawling four times a day), storage was moved to an Amazon S3 bucket. Additionally, a recommender system was built that used k-NN to recommend deals to users based on their search preferences. The code repository for this project is private.
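Since the repository is private, here is a minimal, standard-library-only sketch of the extract-and-categorize step the crawler performs. The real project used Scrapy selectors; the `deal-title` class name and the category keyword lists below are invented for illustration.

```python
from html.parser import HTMLParser

# Hypothetical category keywords; the real crawler's taxonomy is not public.
CATEGORIES = {
    "electronics": {"laptop", "tv", "phone"},
    "apparel": {"shoes", "jacket", "shirt"},
}

class DealParser(HTMLParser):
    """Collect the text of elements marked with class="deal-title"."""
    def __init__(self):
        super().__init__()
        self.in_deal = False
        self.deals = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "deal-title":
            self.in_deal = True

    def handle_data(self, data):
        if self.in_deal and data.strip():
            self.deals.append(data.strip())
            self.in_deal = False

def categorize(title):
    """Assign a deal title to the first category whose keywords it mentions."""
    words = set(title.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            return category
    return "other"

html = """
<div class="deal-title">Dell laptop 20% off</div>
<div class="deal-title">Nike shoes clearance</div>
"""
parser = DealParser()
parser.feed(html)
print({title: categorize(title) for title in parser.deals})
# → {'Dell laptop 20% off': 'electronics', 'Nike shoes clearance': 'apparel'}
```

In the actual project a Scrapy spider would yield these categorized items through an item pipeline, which is where the S3 upload would hook in.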
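The k-NN recommender can be sketched in a similar hedged way: treat each deal title and the user's search terms as bag-of-words vectors, then return the k deals nearest to the query under cosine similarity. All deal titles and function names here are illustrative, not the project's private code.

```python
import math
from collections import Counter

def vectorize(text):
    """Turn a text string into a sparse bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def recommend(preferences, deals, k=3):
    """Return the k deals nearest to the user's search preferences."""
    query = vectorize(" ".join(preferences))
    scored = [(cosine_similarity(query, vectorize(title)), title)
              for title in deals]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [title for score, title in scored[:k] if score > 0]

deals = [
    "Dell XPS 13 laptop 20% off",
    "Nike running shoes clearance",
    "Samsung 55 inch 4K TV deal",
    "Lenovo ThinkPad laptop discount",
    "Instant Pot pressure cooker sale",
]
print(recommend(["laptop", "discount"], deals, k=2))
# → ['Lenovo ThinkPad laptop discount', 'Dell XPS 13 laptop 20% off']
```

A production system would likely use a library implementation (e.g. scikit-learn's nearest-neighbors search) over richer features than raw title words, but the geometry of the recommendation step is the same.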