Usage

Overview

civic-scraper provides a command-line tool and underlying Python library that can be used to fetch metadata about government documents and help download those documents.

The command-line tool makes it easy to get started scraping for basic use cases, while the Python library offers a wider range of options for use in custom scripts.

Government agendas and other files downloaded by civic-scraper are saved to a standard but configurable location in the user’s home directory (~/.civic-scraper on Linux/Mac).

Below are more details on using the Command line as well as writing Custom scripts.

Note

civic-scraper currently supports scraping of five software platforms: Civic Clerk, Civic Plus, Granicus, Legistar and PrimeGov.

Find a site to scrape

Before you can start scraping government documents, you must first pinpoint URLs for one or more agencies of interest. Alternatively, you may want to review our lists of known Civic Plus sites or Legistar sites to see if any agencies in your area use one of these platforms.

In addition to Civic Plus and Legistar, civic-scraper currently supports Civic Clerk, Granicus and PrimeGov.

If your target agency uses one of these platforms, you should be able to scrape the site by writing a Python script that uses the appropriate platform scraper class.

If your agency site is not currently supported, you can try reaching out to us to see if the platform is on our development roadmap. We also welcome open-source contributions if you want to add support for a new platform.

Command line

Once you install civic-scraper and find a site to scrape, you’re ready to begin using the command-line tool.

Note

To test drive examples below, you should replace <site URL> with a URL to a Civic Plus site, e.g. http://nc-nashcounty.civicplus.com/AgendaCenter.

Getting help

civic-scraper provides a scrape subcommand as the primary way to fetch metadata and files from government sites. You can use the tool’s --help flag to get details on the available options:

civic-scraper scrape --help

Basic usage

By default, civic-scraper checks a site for meetings that occur on the current day and generates a metadata CSV listing information about any available meeting agendas or minutes:

# Scrape current day and generate metadata CSV
civic-scraper scrape --url <site URL>

Download documents

civic-scraper does not download agendas or minutes by default since, depending on the date range of the scrape and the size of the documents, this could involve a large quantity of data.

You must explicitly tell civic-scraper to download documents by using the --download flag, which will fetch and save agendas/minutes to civic-scraper’s cache directory:

civic-scraper scrape --download --url <site URL>

Scrape by date

civic-scraper lets you set a date range so you can scrape documents from meetings in the past:

# Scrape docs from meetings in January 2020
civic-scraper scrape \
  --start-date=2020-01-01 \
  --end-date=2020-01-31 \
  --url <site URL>

Scrape multiple sites

If you need to scrape more than one site at a time, you can supply a CSV containing URLs to civic-scraper.

The input CSV must store site URLs in a column called url, similar to the list of known sites for the Civic Plus platform.

Let’s say we have a ca_examples.csv with two agencies in California:

state,url
ca,https://ca-alpinecounty.civicplus.com/AgendaCenter
ca,https://ca-anaheim.civicplus.com/AgendaCenter

You can scrape both sites by supplying the CSV’s path to the --urls-file flag:

# Scrape current day for URLs listed in CSV (should contain "url" field)
civic-scraper scrape --urls-file ca_examples.csv

Store scraping artifacts

As part of the scraping process, civic-scraper acquires “intermediate” file artifacts such as HTML pages with links to meeting agendas and minutes.

We believe it’s important to keep such file artifacts for the sake of transparency and reproducibility.

Use the --cache flag to store these files in the civic-scraper cache directory:

civic-scraper scrape --cache --url <site URL>

Putting it all together

The command-line options mentioned above can be used in tandem (with the exception of --url and --urls-file, which are mutually exclusive).

For example, the command below:

civic-scraper scrape \
  --cache \
  --download \
  --start-date=2020-01-01 \
  --end-date=2020-01-31 \
  --url <site URL>

would perform the following actions:

  • Generate a metadata CSV listing available documents for meetings in January 2020

  • Download agendas and minutes for meetings in the specified date range

  • Cache the HTML of search results pages containing links to agendas/minutes

Custom scripts

civic-scraper provides an importable Python package for users who are comfortable creating their own scripts. The Python package offers a wider range of features for added flexibility and more advanced scenarios (e.g. controlling the location of downloaded files or avoiding the download of excessively large files).

Note

In order to use civic-scraper in a script, you must install the package and import one of the platform scraper classes. In the examples below, we use the CivicPlusSite class. See the platforms folder on GitHub for other available platform classes.

Site classes may support slightly different interfaces due to differences in the features each platform offers.

It’s a good idea to review the docstrings and methods for a class before attempting to use it.
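
For example, you can print a class’s documentation from an interactive session using Python’s built-in help:

from civic_scraper.platforms import CivicPlusSite

# Review the class and its scrape method before using them
help(CivicPlusSite)
help(CivicPlusSite.scrape)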

Scrape metadata

Once you install civic-scraper and find a site to scrape, you’re ready to begin using the civic_scraper Python package.

Note

Below we use East Palo Alto, CA as an example. More agencies can be found in the list of known sites for the Civic Plus platform.

Create an instance of CivicPlusSite by passing it the URL for an agency’s CivicPlus Agenda Center site. Then call the scrape method:

from civic_scraper.platforms import CivicPlusSite
url = 'https://ca-eastpaloalto.civicplus.com/AgendaCenter'
site = CivicPlusSite(url)
assets_metadata = site.scrape()

Note

CivicPlusSite is an alias for more convenient import of the actual Civic Plus class located at civic_scraper.platforms.civic_plus.site.Site.
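
In other words, the following two imports refer to the same class:

# Convenience alias
from civic_scraper.platforms import CivicPlusSite

# Fully qualified import of the same class
from civic_scraper.platforms.civic_plus.site import Site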

CivicPlusSite.scrape will automatically store downloaded assets in the default cache directory.

This location can be customized by setting the CIVIC_SCRAPER_DIR environment variable (see Changing the download location below) or by passing an instance of civic_scraper.base.cache.Cache to CivicPlusSite:

from civic_scraper.base.cache import Cache
from civic_scraper.platforms import CivicPlusSite

url = 'https://ca-eastpaloalto.civicplus.com/AgendaCenter'

# Change output dir to /tmp
site = CivicPlusSite(url, cache=Cache('/tmp'))
assets_metadata = site.scrape()

Export metadata to CSV

CivicPlusSite.scrape returns an AssetCollection containing Asset instances.

The Asset instances store metadata about the specific meeting agendas and minutes discovered on the site.
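
For example, you can loop over the collection and inspect individual assets before exporting or downloading anything. A minimal sketch, assuming Asset attributes mirror the Metadata CSV fields described below:

# Inspect discovered assets (attribute names assumed to match the metadata CSV columns)
for asset in assets_metadata:
    print(asset.asset_type, asset.meeting_date, asset.url)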

To save a timestamped CSV containing metadata for available assets, call AssetCollection.to_csv() with a target output directory:

# Save metadata CSV
assets_metadata.to_csv('/tmp/civic-scraper/metadata')

Download assets

There are two primary ways to download file assets discovered by a scrape.

You can trigger downloads by passing download=True to CivicPlusSite.scrape:

site.scrape(download=True)

Or you can loop over the Asset instances in an AssetCollection and call download() on each with a target output directory:

assets_metadata = site.scrape()
for asset in assets_metadata:
    asset.download('/tmp/civic-scraper/assets')

Scrape by date

By default, scraping checks the site for meetings on the current day (based on the user’s local time).

Scraping can be modified to capture assets from different date ranges by supplying the optional start_date and/or end_date arguments to CivicPlusSite.scrape.

Their values must be strings of the form YYYY-MM-DD:

# Scrape info from January 1-30, 2020
assets_metadata = site.scrape(start_date='2020-01-01', end_date='2020-01-30')

Note

The above will not download the assets by default. See the Download assets section above for details on saving the discovered files locally.
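
Alternatively, the date arguments can be combined with download=True to fetch the files in a single call:

# Scrape January 1-30, 2020 and download the discovered files
assets_metadata = site.scrape(start_date='2020-01-01', end_date='2020-01-30', download=True)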

Advanced configuration

You can exercise more fine-grained control over the size and type of files to download using the file_size and asset_list arguments to CivicPlusSite.scrape:

# Download only minutes that are 20MB or smaller
site.scrape(
  download=True,
  file_size=20,
  asset_list=['minutes']
)

Here are more details on the parameters mentioned above:

  • file_size - Maximum file size in megabytes; files larger than this will not be downloaded.

  • asset_list - Limit downloads to one or more asset types (described below in Metadata CSV). The default is to download all asset types.

Metadata CSV

civic-scraper can produce a CSV of metadata about agendas, minutes and other files discovered during a scrape. The file is automatically generated when using the command line and can be exported by calling AssetCollection.to_csv from a custom script (see the sketch after the field list below).

The generated file contains the following information:

  • url (str) - The download link for an asset

  • asset_name (str) - The title of an asset. Ex: City Council Special Budget Meeting - April 4, 2020

  • committee_name (str) - The name of the committee that generated the asset. Ex: City Council

  • place (str) - Name of the place associated with the asset (lowercased, punctuation removed). Ex: eastpaloalto

  • state_or_province (str) - The lowercase two-letter abbreviation for the state or province associated with an asset

  • asset_type (str) - One of the following asset types for meeting-related documents:

    • agenda

    • minutes

    • audio

    • video

    • agenda_packet - The exhibits and ancillary documents attached to a meeting agenda.

    • captions - The transcript of a meeting recording.

  • meeting_date (str) - Date of the meeting in YYYY-MM-DD format, or blank if no date is given.

  • meeting_time (str) - Time of the meeting, or blank if no time is given.

  • meeting_id (str) - A unique meeting ID assigned to the record.

  • scraped_by (str) - Name and version of the scraper that produced the record. Ex: civicplus_v0.1.0

  • content_type (str) - The MIME type of the asset. Ex: application/pdf

  • content_length (str) - The size of the asset in bytes.
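
Because the CSV uses the column names above, it is easy to post-process with standard tools. Below is a minimal sketch that uses Python’s csv module to print download links for agendas; the file path is a placeholder for wherever you saved a metadata CSV:

import csv

# Placeholder path: point this at a metadata CSV produced by civic-scraper
with open('/tmp/civic-scraper/metadata/metadata.csv') as infile:
    for row in csv.DictReader(infile):
        # Keep only meeting agendas
        if row['asset_type'] == 'agenda':
            print(row['meeting_date'], row['url'])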

Changing the download location

By default, civic-scraper stores downloaded agendas, minutes and other files in the standard cache directory mentioned in the Overview (~/.civic-scraper on Linux/Mac).

You can customize this location by setting the CIVIC_SCRAPER_DIR environment variable.
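
For example, a custom script might set the variable before importing civic-scraper. A minimal sketch, assuming the variable is read when the package is first loaded (so it should be set as early as possible):

import os

# Assumption: civic-scraper reads CIVIC_SCRAPER_DIR when the package loads,
# so set it before importing the package
os.environ['CIVIC_SCRAPER_DIR'] = '/tmp/civic-scraper'

from civic_scraper.platforms import CivicPlusSite

site = CivicPlusSite('https://ca-eastpaloalto.civicplus.com/AgendaCenter')
assets_metadata = site.scrape(download=True)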