SEC EDGAR Filings Scraper

Official · Government & Public Data · Rating 4.70 · 6,200 runs

Extract SEC filings (10-K, 10-Q, 8-K) from EDGAR with company info and filing details.

Free · 1 credit per run
Version 1.0.0 · Updated Mar 1, 2026

Data Fields

This template extracts the following data fields for each filing.

Field            Type
company          text
cik              text
filing_type      text
filing_date      date
description      text
url              url
document_count   number
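
For reference, a single result record might look like the sketch below. All values are purely illustrative, not real EDGAR data.

# Illustrative example of one result record (all values are made up)
record = {
    "company": "Example Corp",
    "cik": "0001234567",
    "filing_type": "10-K",
    "filing_date": "2025-12-31",
    "description": "Annual report for fiscal year 2025",
    "url": "https://www.sec.gov/Archives/edgar/data/...",
    "document_count": 12,
}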

Configuration

input_params

[{"name": "company_names", "type": "textarea", "label": "Company names or CIK numbers", "required": true}]
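
As a rough sketch, a filled-in configuration might look like the following. The company names and CIK number shown are just examples; the textarea accepts one company name or CIK number per line.

# Hypothetical values for the company_names parameter,
# one company name or CIK number per line.
input_params = {
    "company_names": "Apple Inc.\n0000320193\nMicrosoft Corporation"
}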

How to use the SEC EDGAR Filings Scraper

1

Enter company names or CIK numbers

Paste one or more company names or CIK numbers into the input field.

2

Hit Run

The template automatically handles rendering, extraction, and anti-bot bypass.

3

Download data

Get structured results as JSON, CSV, or Excel. Or use our API for automation.

Frequently asked questions

How do I run the SEC EDGAR Filings Scraper?
Sign up for a free CrawlerAPI account, navigate to the SEC EDGAR Filings Scraper template, enter the company names or CIK numbers you want to look up, and click Run. Results are returned in seconds as structured JSON, CSV, or Excel data.

Is the SEC EDGAR Filings Scraper free to use?
Yes, the SEC EDGAR Filings Scraper is free to use. Each run costs 1 credit, and free accounts include 1,000 credits.

What data does this template extract?
It extracts SEC filings (10-K, 10-Q, 8-K) from EDGAR with company info and filing details. Extracted fields include: company, cik, filing_type, filing_date, description, url, document_count.

Can I run this template via the API?
Yes, all templates can be run programmatically via the CrawlerAPI REST API. Send a POST request to /api/v1/templates/41/run/ with your API key and input parameters.
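
As a minimal sketch of such a call in Python: the endpoint path comes from the answer above, but the base URL, the Bearer-style Authorization header, and the shape of the response are assumptions, so check the API reference before relying on them.

import requests

API_KEY = "YOUR_API_KEY"
# Placeholder host; substitute the real CrawlerAPI base URL.
URL = "https://api.crawlerapi.example/api/v1/templates/41/run/"

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
    json={"company_names": "Apple Inc.\n0000320193"},
    timeout=60,
)
response.raise_for_status()
filings = response.json()  # assumed: a list of result records
for filing in filings:
    print(filing["filing_type"], filing["filing_date"], filing["url"])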

Does it handle JavaScript-rendered pages?
Yes. CrawlerAPI uses headless browsers (Playwright) to fully render JavaScript-heavy pages before extracting data. This handles SPAs, lazy-loaded content, infinite scroll, and dynamically generated elements.

What formats can I download results in?
Results can be downloaded as JSON, CSV, or Excel (XLSX). The API also returns data as structured JSON by default, which you can transform into any format you need.
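
If you pull results as JSON through the API, converting them to CSV locally takes only a few lines. This sketch assumes the results are a flat list of records with the fields listed under Data Fields, saved to a local filings.json file.

import csv
import json

# Load results previously saved from the API (assumed: a list of records).
with open("filings.json") as f:
    filings = json.load(f)

fields = ["company", "cik", "filing_type", "filing_date",
          "description", "url", "document_count"]

with open("filings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(filings)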

How does CrawlerAPI avoid getting blocked?
CrawlerAPI uses proxy rotation, automatic User-Agent rotation, and intelligent rate limiting to help avoid blocks.

Can I schedule recurring runs?
Yes. You can schedule any template to run on a recurring basis -- hourly, daily, weekly, or with a custom cron expression. Results are stored and can be delivered via webhook to your endpoint.
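
If you point a scheduled run at a webhook, a minimal receiver could look like the sketch below. The payload format is an assumption (a JSON list of result records), since it is not documented on this page.

from flask import Flask, request

app = Flask(__name__)

@app.route("/crawlerapi-webhook", methods=["POST"])
def receive_results():
    # Assumed payload: the JSON results of the scheduled run.
    filings = request.get_json(force=True)
    print(f"Received {len(filings)} records")  # replace with your own storage logic
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)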

Can I process many companies at once?
Each template run processes a single input. Use the API for batch operations.
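
One simple batch pattern is to loop over companies and trigger one run per company through the endpoint mentioned above; apart from that path, the host, auth header, and response shape below are assumptions.

import requests

API_KEY = "YOUR_API_KEY"
URL = "https://api.crawlerapi.example/api/v1/templates/41/run/"  # placeholder host
companies = ["Apple Inc.", "Microsoft Corporation", "Alphabet Inc."]

all_filings = []
for name in companies:
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
        json={"company_names": name},
        timeout=60,
    )
    resp.raise_for_status()
    all_filings.extend(resp.json())  # assumed: each run returns a list of records

print(f"Collected {len(all_filings)} filings from {len(companies)} companies")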

How fast are the results?
Most templates return results in 2-10 seconds for a single request. JavaScript-rendered pages may take slightly longer (5-15 seconds). Batch jobs run in parallel for maximum throughput.

Do I need to know how to code?
No coding is required. Templates provide a simple web interface where you paste your input and click Run. For developers who want automation, we also offer a full REST API with code examples in Python, Node.js, and cURL.

Is it legal to scrape this data?
Scraping publicly available data is generally legal, as affirmed by the US Ninth Circuit in hiQ Labs v. LinkedIn. However, you should always respect each website's Terms of Service and robots.txt. CrawlerAPI provides the tools -- you are responsible for using them in compliance with applicable laws.

Ready to get your data?

Just enter a URL and hit run. No coding, no setup -- results in seconds.