
How to catch requests.get() exceptions

posted on 2024-12-02 22:06


I'm working on a web scraper for yellowpages.com, which seems to be working well overall. However, while iterating through the pagination of a long query, requests.get(url) will randomly return <Response [503]> or <Response [404]>. Occasionally, I will receive worse exceptions, such as:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.yellowpages.com', port=80): Max retries exceeded with url: /search?search_terms=florists&geo_location_terms=FL&page=22 (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10053] An established connection was aborted by the software in your host machine',))

Using time.sleep() seems to eliminate the 503 errors, but 404s and exceptions remain issues.

I'm trying to figure out how to "catch" the various responses, so I can make changes (wait, change proxy, change user-agent) and try again and/or move on. In pseudocode, something like this:

If error/exception with request.get:
    wait and/or change proxy and user agent
    retry request.get
else:
    pass
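
Concretely, I imagine something along these lines (the attempt count and delay here are just placeholders, and the proxy/user-agent swapping would go where the comments indicate):

import time
import requests

def fetch(url, max_attempts=3, delay=10):
    # Placeholder retry wrapper sketching the pseudocode above.
    for attempt in range(max_attempts):
        try:
            r = requests.get(url, timeout=10)
        except requests.exceptions.RequestException as e:
            print('Attempt', attempt + 1, 'failed:', e)
            time.sleep(delay)  # wait and/or change proxy and user agent here
            continue
        if r.status_code == 200:
            return r  # good response, hand it back to the caller
        print('Attempt', attempt + 1, 'returned status', r.status_code)
        time.sleep(delay)  # 404/503: back off before retrying
    return None  # give up after max_attempts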

At this point, I can't even seem to capture an issue using:

try:
    r = requests.get(url)
except requests.exceptions.RequestException as e:
    print (e)
    import sys #only added here, because it's not part of my stable code below
    sys.exit()
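
(If I understand correctly, a 404 or 503 does not raise an exception by itself; requests.get() returns the Response object normally, so the except block above only fires for network-level failures. Calling raise_for_status() on the response is what turns those status codes into a catchable requests.exceptions.HTTPError, e.g.:)

try:
    r = requests.get(url)
    r.raise_for_status()  # raises requests.exceptions.HTTPError for 4xx/5xx responses
except requests.exceptions.RequestException as e:
    print(e)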

Full code for where I'm starting from is on GitHub and below:

import requests
from bs4 import BeautifulSoup
import itertools
import csv

# Search criteria
search_terms = ["florists", "pharmacies"]
search_locations = ['CA', 'FL']

# Structure for Data
answer_list = []
csv_columns = ['Name', 'Phone Number', 'Street Address', 'City', 'State', 'Zip Code']


# Turns list of lists into csv file
def write_to_csv(csv_file, csv_columns, answer_list):
    with open(csv_file, 'w') as csvfile:
        writer = csv.writer(csvfile, lineterminator='\n')
        writer.writerow(csv_columns)
        writer.writerows(answer_list)


# Creates url from search criteria and current page
def url(search_term, location, page_number):
    template = 'http://www.yellowpages.com/search?search_terms={search_term}&geo_location_terms={location}&page={page_number}'
    return template.format(search_term=search_term, location=location, page_number=page_number)


# Finds all the contact information for a record
def find_contact_info(record):
    holder_list = []
    name = record.find(attrs={'class': 'business-name'})
    holder_list.append(name.text if name is not None else "")
    phone_number = record.find(attrs={'class': 'phones phone primary'})
    holder_list.append(phone_number.text if phone_number is not None else "")
    street_address = record.find(attrs={'class': 'street-address'})
    holder_list.append(street_address.text if street_address is not None else "")
    city = record.find(attrs={'class': 'locality'})
    holder_list.append(city.text if city is not None else "")
    state = record.find(attrs={'itemprop': 'addressRegion'})
    holder_list.append(state.text if state is not None else "")
    zip_code = record.find(attrs={'itemprop': 'postalCode'})
    holder_list.append(zip_code.text if zip_code is not None else "")
    return holder_list


# Main program
def main():
    for search_term, search_location in itertools.product(search_terms, search_locations):
        i = 0
        while True:
            i += 1
            page_url = url(search_term, search_location, i)  # renamed so we don't shadow the url() helper
            r = requests.get(page_url)
            soup = BeautifulSoup(r.text, "html.parser")
            results = soup.find(attrs={'class': 'search-results organic'})
            page_nav = soup.find(attrs={'class': 'pagination'})
            records = results.find_all(attrs={'class': 'info'})
            for record in records:
                answer_list.append(find_contact_info(record))
            if not page_nav.find(attrs={'class': 'next ajax-page'}):
                csv_file = "YP_" + search_term + "_" + search_location + ".csv"
                write_to_csv(csv_file, csv_columns, answer_list)  # output data to csv file
                break

if __name__ == '__main__':
    main()

Thank you in advance for taking the time to read this long post/reply :)


Solution


I've been doing something similar, and this is working for me (mostly):

# For handling the requests to the webpages
import requests
from requests_negotiate_sspi import HttpNegotiateAuth


# Test results, 1 record per URL to test
w = open(r'C:\Temp\URL_Test_Results.txt', 'w')

# For errors only
err = open(r'C:\Temp\URL_Test_Error_Log.txt', 'w')

print('Starting process')

def test_url(url):
    # Test the URL and write the results out to the log files.

    # Had to disable the warnings: with the verify option turned off, a warning is generated because
    # the website certificates are not checked, so results could be "bad". The main site throws errors
    # into the log for each test if we don't suppress the warnings, though.
    requests.packages.urllib3.disable_warnings()
    headers={'User-Agent': 'Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'}
    print('Testing ' + url)
    # Try the website link, check for errors.
    try:
        response = requests.get(url, auth=HttpNegotiateAuth(), verify=False, headers=headers, timeout=5)
    except requests.exceptions.HTTPError as e:
        print('HTTP Error')
        print(e)
        w.write('HTTP Error, check error log' + '\n')
        err.write('HTTP Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.ConnectionError as e:
        # some external sites come through this, even though the links work through the browser
        # I suspect that there's some blocking in place to prevent scraping...
        # I could probably work around this somehow.
        print('Connection error')
        print(e)
        w.write('Connection error, check error log' + '\n')
        err.write(str('Connection Error') + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.RequestException as e:
        # Any other error types
        print('Other error')
        print(e)
        w.write('Unknown Error' + '\n')
        err.write('Unknown Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    else:
        # Note that a 404 is still 'successful' as we got a valid response back, so it comes through
        # here rather than one of the exceptions above; the response from the try block is reused
        # instead of requesting the URL a second time.
        print(response.status_code)
        w.write(str(response.status_code) + '\n')
        print('Success! Response code:', response.status_code)
    print('========================')

test_url('https://stackoverflow.com/')

I'm currently still having some problems with certain sites timing out; you can follow my attempts to resolve that here: 2 Valid URLs, requests.get() fails on 1 but not the other. Why?
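
As an aside (this is not part of my script above), requests can also do the retrying and backoff for you if you mount an HTTPAdapter with a urllib3 Retry onto a Session; a minimal sketch along those lines:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5,                 # retry each request up to 5 times
                backoff_factor=1,        # exponential backoff between attempts
                status_forcelist=[503])  # also retry when the server returns a 503
adapter = HTTPAdapter(max_retries=retries)
session.mount('http://', adapter)
session.mount('https://', adapter)

response = session.get('http://www.yellowpages.com/search?search_terms=florists&geo_location_terms=FL&page=22', timeout=5)
print(response.status_code)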



Category of website: technical article > Q&A

Author: qs

Link: http://www.pythonblackhole.com/blog/article/247234/83f3ebc3ffcc4f2b8678/

Source: python black hole net

