Content Writing Prompt: List of United States tornadoes in April 2019
In 2019, 1,522 confirmed tornadoes touched down in these United States.
Two hundred seventy-six of them (18.1%) took place in April of that year.
When categorized by the wonderfully rad-sounding Enhanced Fujita Scale, the severity of April’s tornadoes broke down as follows:
EFU*: 18 (6.5%)
EF0: 92 (33.3%)
EF1: 127 (46.0%)
EF2: 34 (12.3%)
EF3: 5 (1.8%)
EF4: 0 (0.0%)
EF5: 0 (0.0%)
*EFU (“EF Unknown”) covers tornadoes that were confirmed but couldn’t be assigned a rating, usually for lack of surveyable damage.
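For the curious, here’s a minimal Python sketch (Python being our assumption about tooling; any language would do) that reproduces the percentages above from the raw counts:

```python
# Confirmed April 2019 tornadoes by Enhanced Fujita rating (counts from the list above).
april_counts = {
    "EFU": 18,
    "EF0": 92,
    "EF1": 127,
    "EF2": 34,
    "EF3": 5,
    "EF4": 0,
    "EF5": 0,
}

total = sum(april_counts.values())  # 276

for rating, count in april_counts.items():
    # One decimal place, matching the breakdown above.
    print(f"{rating}: {count} ({count / total:.1%})")
```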
Here we’ve graphed said breakdown of tornadic severity (April 2018 included for reference):

We can clearly see that April’s tornadic activity was higher in 2019 than in 2018.
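If you’d like to rebuild a chart like this one yourself, here’s a rough matplotlib sketch (matplotlib and numpy are our assumptions about tooling, and the April 2018 counts below are placeholder zeros, since the real 2018 figures aren’t listed in this piece):

```python
import matplotlib.pyplot as plt
import numpy as np

ratings = ["EFU", "EF0", "EF1", "EF2", "EF3", "EF4", "EF5"]
april_2019 = [18, 92, 127, 34, 5, 0, 0]  # counts from the breakdown above
april_2018 = [0, 0, 0, 0, 0, 0, 0]       # PLACEHOLDER: swap in the real April 2018 counts

x = np.arange(len(ratings))
width = 0.4

fig, ax = plt.subplots()
ax.bar(x - width / 2, april_2018, width, label="April 2018")
ax.bar(x + width / 2, april_2019, width, label="April 2019")
ax.set_xticks(x)
ax.set_xticklabels(ratings)
ax.set_ylabel("Confirmed tornadoes")
ax.set_title("April tornadoes by EF rating, 2018 vs. 2019")
ax.legend()
plt.show()
```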
Now look at this same breakdown of (confirmed) tornadoes by state for both years:

With all that said (and shown), let’s pause for a moment here. How does it feel having learned all this together? Has all this learning led to any knowing? Shouldn’t some learning default right to some knowing? Two things, briefly, here:
1. It makes sense to me that learning (if done successfully) leads to knowing. But that knowing, by itself, isn’t good for much. It’s fuel in search of fire.
2. Visual representations of what we’ve learned can be as dangerous as they are helpful. This is because learnings visualized, as we’ve done here, infuse mere observations with consequentiality. (I.e., Look. We have charts. What we’re looking at here must mean...something.)
The dangers here are amplified by the very human aversion to being the only one in the room to raise their hand and admit they don’t understand what something means.
So, what are we to do?
Well, once you’re staring at charts in slides projected on walls, your options are limited. Time and money have been spent. Copy has been proofread. Egos and job titles are now anteed on the table.
Despite being warmly encouraged, questions at this point will often be seen as obstructionist and/or a complete and total buzzkill.
While easier said than done, it’s better to get ahead of the data gathering. What problem do we have that we’re trying to solve, and how can data (and, by extension, cool charts) help us solve the problem (or support a solve)?
Diving into the data looking for the problem to solve is like opening the fridge without knowing what you need to cook.
Best to START with a problem.
But then, once you’ve got your data in play, take a moment and ask yourself, “How does what we have in front of us change what we should recommend or not recommend?” If your data and your charts don’t provide directional intelligence, then (most likely) your initial problem wasn’t problematic enough.
Point being, don’t blame the data. But also don’t blindly count on it. Data just is.
Much like the weather.