Saving the WSPR database to PostgreSQL

I am an avid ham radio operator, and frequently have conversations with folks all around the world using less power than is required to light an incandescent light bulb. I use various modes of communication to do so, including voice, and digital methods such as Hellschreiber and Olivia. One of the modes I use is a one-way beacon protocol called Weak Signal Propagation Reporter, or WSPR. This mode involves long, slow transmission of minimal data including call sign, grid square, and power. A project I’m working on required saving the WSPR database, so I wrote some scripts to do just that.

The WSPR website hosts CSV files of each and every contact made via the mode. So, I wrote a collection of tools for downloading the data and saving it to PostgreSQL. The scripts also add approximate locations (Maidenhead grid square centroids) to the WSPR database.

To use these tools, you’ll need the following modules installed. I recommend using Anaconda, but you can install them via pip using the included requirements.txt file. This tutorial assumes that you have the PostGIS extension installed on the database to which you’re going to import the data.

  • SQLAlchemy/GeoAlchemy
  • Fiona
  • Shapely
  • psycopg2
  • requests
  • BeautifulSoup

First, you’ll need to import the Maidenhead grid squares into the database. Follow these steps to do so:

  1. Create a configuration file for the PostgreSQL credentials. Run python create_config.py and you’ll be prompted for the relevant information.
  2. Now, you’ll need to download the Maidenhead grid square data (Click the Download Document button).
  3. Next, clone the GeoWSPR repository by running git clone https://github.com/minorsecond/GeoWSPR.git.
  4. You need the modules, so run pip install -r requirements.txt.
  5. You’ll then want to open the wspr_pg_database/__init__.py file and change the postgresql://{username}:{password}@{IP}/{db_name} connection string to your values.
  6. Next, open the GeoWSPR/maidenhead_to_pg.py file and do the same.
  7. Run python maidenhead_to_pg.py and enter the root path to your Maidenhead grid square Shapefiles when prompted. This step will take quite a long time, so expect to wait.

Now, you’ll download and extract the WSPR CSV files from the website, by following these steps:

  1. Run python downloader.py.
  2. Enter a space-separated list of the files you wish to download. For example, to download the first three files, enter `1 2 3`. If you want to download all of the files, enter 0. Press enter.
  3. Choose your output location and press enter.
  4. The files will begin downloading & extracting. Check on the progress from time to time as the website may rate limit the script and cause it to fail. If this happens, the script will tell you which file failed and will start back at the menu after you press enter.
  5. After the script is finished, ensure all of the expected files are in the output directory.

The final step is to import the contacts into the database. To do so, you’ll run the csv_to_pg.py script.

  1. Run python csv_to_pg.py.
  2. Enter the location of the CSV files. This should be the same location you entered in step 3 of the downloader script instructions.
  3. Press enter and wait. This stage takes a considerable amount of time and it might be best to run it on an unused machine that won’t be interrupted.
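The chunked-import pattern csv_to_pg.py relies on can be sketched as follows. This is a simplified illustration, not the script itself: the table name and the use of to_sql are my assumptions, and the real script also joins in the grid square centroids.

```python
import pandas as pd

def import_csv(csv_path, engine, table="wspr_spots", chunksize=1000):
    """Stream a WSPR CSV into the database in chunks to bound memory use."""
    total = 0
    # header=None because the WSPR archive CSVs have no header row
    for chunk in pd.read_csv(csv_path, header=None, chunksize=chunksize):
        chunk.to_sql(table, engine, if_exists="append", index=False)
        total += len(chunk)  # count actual rows; the last chunk may be short
    return total
```

Reading in chunks is what makes it feasible to import multi-gigabyte monthly archives on a machine with modest RAM.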

That’s it! You now have a PostGIS database of WSPR contacts.

Note: if you tune the chunk size in csv_chunk(), you may get speedier importing. I have it set to 1000, but you may be able to go as high as 5000 (or even higher). See below:

def csv_chunk(csv_path, n_rows, processed_rows, processed_files):
    chunksize = 1000  # rows per chunk; tune this for your hardware
    current_file_rows = 0
    for chunk in pd.read_csv(csv_path, chunksize=chunksize, header=None):
        process(chunk)
        # Count len(chunk) rather than chunksize: the final chunk may be short
        processed_rows += len(chunk)
        current_file_rows += len(chunk)
        print("Processed {0} rows out of {1} from {2}. {3} total rows from {4} files".
              format(current_file_rows, n_rows, csv_path, processed_rows, processed_files))

    return processed_rows
