About the challenge
Mitigating the impact of climate change is one of the most urgent challenges of our century. Understanding where emissions come from and how they are evolving over time is crucial to making progress. However, the amount of information can be overwhelming, especially when placed in historical and economic context. Data science offers tools to help us parse massive datasets and extract insight from interactive visualizations. These visual representations of data can help us make informed arguments and decisions as we collectively work to preserve the health of our planet.
This year's DawsCon – DawsonAI data challenge introduces the broad field of data journalism, in which large datasets from various sources are compiled into a visual aid at the centre of a story meant to inform the public. Inspired by the work done at Our World in Data, we will begin with publicly available CO2 emissions data for Canada at the provincial level.
The overall goal of the challenge is to develop a news story that continues the analysis from the Warm-up, but at the provincial level here in Canada.
Instructions for Warm-up (completed)
- Open a new Jupyter notebook in Google Colab (requires a Gmail account)
- In a separate window, open the Tutorial Notebook (see Discord). The goal of this tutorial is to break down the code required to generate the graphs we see in the Our World in Data article, which show CO2 emissions over the last 50 years for any country.
- Cell by cell, follow the tutorial and copy cell chunks over to your blank notebook. This lets you see for yourself how the different code chunks work, without losing your original copy of the tutorial.
- Notice the use of different variables to contextualize the raw data, e.g. country population and economic output
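As a minimal sketch of the kind of contextualization the tutorial covers, the snippet below computes per-capita emissions from raw totals. The numbers here are made up for illustration; the column names mirror those in the real Our World in Data CO2 dataset, but the Tutorial Notebook is the authoritative reference for loading the actual file.

```python
import pandas as pd

# Illustrative stand-in for the Our World in Data CO2 dataset.
# (Hypothetical numbers; the real file has columns such as
# "country", "year", "co2" in megatonnes, and "population".)
df = pd.DataFrame({
    "country": ["Canada", "Canada", "Canada"],
    "year": [2018, 2019, 2020],
    "co2": [585.0, 580.0, 540.0],           # annual CO2 emissions, Mt
    "population": [37.0e6, 37.5e6, 38.0e6],
})

# Contextualize raw emissions with population:
# convert Mt to tonnes, then divide by population -> tonnes per person.
df["co2_per_capita"] = df["co2"] * 1e6 / df["population"]

print(df[["year", "co2_per_capita"]])
```

The same pattern extends to economic context: dividing `co2` by a GDP column instead of `population` gives emissions intensity per unit of output.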
See the Discord #challenge-resources channel for the tutorial recording.
The overall goal of the challenge is to develop a story with your data. Take inspiration from the analysis and examples from the Warm-up tutorial.
What to submit
- Data visual(s). At minimum, one representative image of your data visualization. Use as many visualizations as necessary, but note that more is not necessarily better.
- 200-word “news story” article about your findings.
- Code used for the analysis, in a Colab notebook (submitted as a link)
- A 3-minute presentation, given to the panel of judges. Provide presentation slides (link or uploaded file).
- Try to use data-driven arguments
- Be inspired by the original source: Our World in Data
- Explore the use of:
All teams will make 3-minute presentations to the panel of judges, in person and online.
The schedule will be provided after the 11:59 AM submissions are received.
UPDATE: The submission deadline has been extended to 1:00 PM. However, the first presentations will begin at approximately 1:00 PM (exact time to be announced).
$900 CAD in prizes
Data Journalism Grand Prize winner.
Honourable mention: Technical
For the best technical implementation of the data visualization.
Honourable mention: Aesthetic
For the most creative and/or aesthetically interesting data visualization.
People's Choice prize
Voted for by the community.
When judging submissions, the judges will consider questions such as:
Technical: How did you produce your visual(s)? Did the team rise to the technical challenge of manipulating data? Is there something innovative about the analysis? Is it remarkable that the team could hack this analysis together in just a day or two?
Aesthetic: How "beautiful" or aesthetically interesting is the data visualization? Are the visualizations beautiful, elegant, and polished while coherently capturing the essentials of the story? Is there something unique, creative, crisp, or clean about the visuals?
Story: How effective, engaging, and coherent is the overall story? How well did the team present? Is the communication of the data analysis, the accompanying visuals, and the methods used clear and understandable?