# AI Web Scraper

This project is an AI-powered web scraper that allows you to extract information from HTML sources based on user-defined requirements. It generates scraping code and executes it to retrieve the desired data.
## Prerequisites

Before running the AI Web Scraper, ensure you have the following:

- Python 3.x
- The Python packages listed in `requirements.txt`
- An OpenAI API key with access to GPT-4
## Installation

1. Clone the project repository:

   ```shell
   git clone https://github.com/dirkjbreeuwer/gpt-automated-web-scraper
   ```
2. Navigate to the project directory:

   ```shell
   cd gpt-automated-web-scraper
   ```
3. Install the required Python packages:

   ```shell
   pip install -r requirements.txt
   ```
4. Set up your OpenAI GPT-4 API key:

   - Obtain an API key from OpenAI by following their documentation.
   - Rename the `.env.example` file in the project directory to `.env`.
   - Add the following line to the `.env` file, replacing `YOUR_API_KEY` with your actual API key:

     ```plaintext
     OPENAI_API_KEY=YOUR_API_KEY
     ```
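Once the key is in `.env`, the scraper can read it from the process environment. As a rough sketch of what that involves, assuming a simple `KEY=VALUE` file format (the `load_env` helper below is illustrative, not the project's actual code; libraries such as `python-dotenv` provide the same behavior):

```python
import os

def load_env(path=".env"):
    """Read simple KEY=VALUE lines from a .env-style file into os.environ."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite a key that is already set in the environment.
            os.environ.setdefault(key.strip(), value.strip())

load_env()
api_key = os.environ.get("OPENAI_API_KEY")
```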
## Usage

To use the AI Web Scraper, run the `gpt-scraper.py` script with the desired command-line arguments.
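Conceptually, the script first needs the raw HTML, either fetched from a URL or read from a local file, matching the two `--source-type` values. A minimal sketch of that step (the `read_source` helper is hypothetical, not the script's actual implementation):

```python
import pathlib
import urllib.request

def read_source(source: str, source_type: str) -> str:
    """Load raw HTML from a URL or a local file path."""
    if source_type == "url":
        with urllib.request.urlopen(source) as resp:
            return resp.read().decode("utf-8", errors="replace")
    if source_type == "file":
        return pathlib.Path(source).read_text(encoding="utf-8")
    raise ValueError(f"unknown source type: {source_type}")
```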
### Command-line Arguments

The following command-line arguments are available:

- `--source`: The URL or local path of the HTML source to scrape.
- `--source-type`: The type of the source; specify either `"url"` or `"file"`.
- `--requirements`: User-defined requirements describing the data to extract.
- `--target-string`: An example string that can be found within the page you want to scrape. Because of GPT-4's maximum token limit (4k tokens), the model processes only the smaller subset of the HTML surrounding this string, rather than the full page.
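A flag list like this is typically declared with `argparse`; the sketch below shows how these four options could be wired up (illustrative only, not the script's actual code):

```python
import argparse

def build_parser():
    """Declare the four command-line flags described above."""
    parser = argparse.ArgumentParser(description="AI Web Scraper")
    parser.add_argument("--source", required=True,
                        help="URL or local path of the HTML source")
    parser.add_argument("--source-type", choices=["url", "file"], required=True,
                        help="whether --source is a URL or a local file")
    parser.add_argument("--requirements", required=True,
                        help="natural-language description of the data to extract")
    parser.add_argument("--target-string", required=True,
                        help="example string found near the data of interest")
    return parser
```

Note that `argparse` converts `--source-type` and `--target-string` to the attributes `source_type` and `target_string` on the parsed namespace.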
### Example Usage

Here is an example command for using the AI Web Scraper:

```shell
python3 gpt-scraper.py --source-type "url" --source "https://www.scrapethissite.com/pages/forms/" --requirements "Print a JSON file with all the information available for the Chicago Blackhawks" --target-string "Chicago Blackhawks"
```

Replace the values for `--source`, `--requirements`, and `--target-string` with your own.
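The `--target-string` trimming described above can be pictured as a simple windowing function over the raw HTML (a hypothetical helper; the real script may locate the relevant subset differently):

```python
def html_window(html: str, target: str, radius: int = 2000) -> str:
    """Return a slice of html centered on the first occurrence of target.

    Keeping only the region around an example string lets the prompt
    fit within the model's context window.
    """
    i = html.find(target)
    if i == -1:
        # Target not found: fall back to the head of the document.
        return html[: 2 * radius]
    start = max(0, i - radius)
    return html[start : i + len(target) + radius]
```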
## License

This project is licensed under the [MIT License](LICENSE). Feel free to modify and use it according to your needs.