Title: | Lifebrain Global Brain Health Survey Data |
---|---|
Description: | Between June 2019 and August 2020, Lifebrain conducted the Global Brain Health Survey to collect data on people’s perceptions of brain health and willingness to take care of their brain by adopting new lifestyles. The survey was conducted online and translated into 14 languages to reach as many people as possible. In total, it collected 27,590 responses from people in 81 countries. This package contains code and data from this survey. |
Authors: | Athanasia Mo Mowinckel [aut, cre] |
Maintainer: | Athanasia Mo Mowinckel <[email protected]> |
License: | CC BY 4.0 + file LICENSE |
Version: | 0.0.1.9000 |
Built: | 2024-11-20 03:51:13 UTC |
Source: | https://github.com/lifebrain/gbhs |
Between June 2019 and August 2020, Lifebrain conducted the Global Brain Health Survey to collect data on people’s perceptions of brain health and willingness to take care of their brain by adopting new lifestyles. The survey was conducted online and translated into 14 languages to reach as many people as possible. In total, it collected 27,590 responses from people in 81 countries.
data(gbhs)
An object of class tbl_df (inherits from tbl, data.frame) with 27590 rows and 107 columns.
A data.frame with all responses
Budin-Ljøsne I, Friedman BB, Suri S, Solé-Padullés C, Düzel S, Drevon CA, Baaré WFC, Mowinckel AM, Zsoldos E, Madsen KS, Carver RB, Ghisletta P, Arnesen MR, Bartrés-Faz D, Brandmaier AM, Fjell AM, Kvalbein A, Henson RN, Kievit RA, Nawijn L, Pochet R, Schnitzler A, Walhovd KB and Zasiekina L (2020) The Global Brain Health Survey: Development of a Multi-Language Survey of Public Views on Brain Health. Front. Public Health 8:387. doi: 10.3389/fpubh.2020.00387
data(gbhs)
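A quick check of the documented dimensions after loading; a minimal sketch, assuming the package is attached under the name implied by the repository (gbhs):
library(gbhs)   # package name assumed from the repository URL
data(gbhs)
dim(gbhs)       # documented as 27590 rows and 107 columns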
Several questions in the GBHS can be reshaped into long-format data, as they either contain data from multiple-choice questions (each answer separated with a ';') or come from a group of questions exploring the same theme with the same response scale. This function collects these questions and responses into dedicated columns, where the "key" column is the question asked and responses are stored in "value" (response category), "continuous" (ordinal scale), and "bin" (binary scale). All other data remain in the data frame, but the number of rows increases, and the "submission_id" column denotes the individual respondent.
gbhs_long_q(data, question)
data | data.frame to work on. Needs to be a |
question | integer indicating which question to make the data longer from. Values accepted are |
data frame with long data
data(gbhs)
gbhs_long_q(gbhs, 2)
gbhs_long_q(gbhs, 4)
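To see the reshaping described above, a minimal sketch; question 2 is chosen only for illustration, and the column names follow the description (key, value, submission_id):
data(gbhs)
long2 <- gbhs_long_q(gbhs, 2)
nrow(long2) > nrow(gbhs)                               # more rows after lengthening
c("submission_id", "key", "value") %in% names(long2)   # dedicated long-format columns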
Get path of child document
gbhs_path_child(path = NULL, ...)
path | filename of child document to get path of. If NULL, lists possibilities |
... | other arguments to |
string of file path
This function is adapted from readxl::readxl_example().
gbhs_path_child()
gbhs_path_child("bin_desc.Rmd")
gbhs_path_child("ord_mod.Rmd")
The raw data from the survey is stored in individual files for each survey language. These are not cleaned or harmonised, as there are small inconsistencies in coding between the languages.
gbhs_path_data(path = NULL, type = "clean", destination = NULL, ...)
path | Name of file in quotes with extension. If |
type | type of data to look up. Either "clean" (default) or "raw" |
destination | optional string indicating where to copy the file to |
... | other arguments to |
string of file path
This function is adapted from readxl::readxl_example().
gbhs_path_data()
gbhs_path_data("114338_en.tsv")
head(read.delim(gbhs_path_data("114338_en.tsv")))
head(read.delim(gbhs_path_data("114338_en.tsv", "raw")))
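The destination argument can be used to copy a bundled data file out of the installed package; a minimal sketch, assuming the copy keeps the original file name:
out_dir <- tempdir()
gbhs_path_data("114338_en.tsv", type = "clean", destination = out_dir)
file.exists(file.path(out_dir, "114338_en.tsv"))   # assumes the copied file keeps its name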
Get path to meta-data and codebooks
gbhs_path_meta(path = NULL, type = "codebook", ...)
path | filename of utility file to get path of. If NULL, lists possibilities |
type | either "codebook" or "meta-data" |
... | other arguments to |
string of file path
This function is adapted from readxl::readxl_example().
gbhs_path_meta()
gbhs_path_meta("131674_ch.json")
gbhs_path_meta(type = "meta-data")
gbhs_path_meta("131674_ch.json", type = "meta-data")
There are two basic types of template files, one descriptive and one with models. These are based on the exploration and testing of the data for our published manuscripts and reports. To run a "model" document, the corresponding "descriptive" document for that paper must have been run first.
gbhs_path_rmd(type = "descriptives", paper = 1, destination = NULL)
type | either "descriptive" (default) or "model" |
paper | an integer of either 1, 2 or 3 |
destination | optional string indicating where to copy the file to |
string of file path
This function is adapted from readxl::readxl_example().
gbhs_path_rmd()
gbhs_path_rmd("descriptive", 2)
gbhs_path_rmd("model", 3)
Get path of utility functions
gbhs_path_utilities(path = NULL, ...)
path | filename of utility file to get path of. If NULL, lists possibilities |
... | other arguments to |
string of file path
This function is adapted from readxl::readxl_example().
gbhs_path_utilities()
gbhs_path_utilities("data-utils.R")
gbhs_path_utilities("model-utils.R")
Descriptives and models for the GBHS data can be explored by generating the pre-created report templates.
gbhs_render_report(data = gbhs, type = "desc", paper = 1, output_dir = ".", ...)
data | data to be used. Can be a subselection of the gbhs data, or the entire |
type | either "descriptive" (default) or "model" |
paper | an integer of either 1, 2 or 3 |
output_dir | Directory to output the document to |
... | other arguments to |
creates a report using the data and the GBHS template
## Not run:
gbhs_render_report(type = "desc", paper = 1)
gbhs_render_report(type = "desc", paper = 2)
gbhs_render_report(type = "mod", paper = 1)
## End(Not run)
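Because a "model" template depends on its matching "descriptive" template having been run first (see the template description above), one workflow is to render the two in order; a minimal sketch, not run by default:
## Not run:
gbhs_render_report(type = "desc", paper = 1, output_dir = "reports")  # descriptives first
gbhs_render_report(type = "mod", paper = 1, output_dir = "reports")   # then the matching models
## End(Not run)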
Barchart for GBHS data
ggbar(data, grouping = NULL)
data | GBHS data to plot |
grouping | Grouping variable |
ggplot object
ggbar(gbhs_long_q(gbhs, 2))
Utility function to plot model output from the GBHS survey
ggmodel(data, y, reverse = FALSE)
data | data as prepared from |
y | What goes on the y-axis |
reverse | Should the scale be reversed |
ggplot object
Create a stacked bar chart
ggstacked(data, y = key, npos = 1.1, min_pc = 0.05, pattern = NULL, n_breaks = 2, text_size = 3)
data | data to plot |
y | value for the y-axis |
npos | position of sidebar text |
min_pc | minimum percent at which to display text |
pattern | regex pattern to use with |
n_breaks | number of break points |
text_size | text size |
ggplot object
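ggstacked ships without an example here; a minimal sketch, assuming long-format data from gbhs_long_q (which provides the default key column) is a suitable input:
## Not run:
data(gbhs)
ggstacked(gbhs_long_q(gbhs, 2))   # defaults: y = key, n_breaks = 2, text_size = 3
## End(Not run)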
Wrap a ggstacked plot
ggstacked_wrap(data, y, ...)
data | data to wrap |
y | y-axis variable |
... | other arguments to |
ggplot object
Transform logit to probability
logit2prob(logit)
logit | a vector on the logit scale |
a vector of probabilities
logit2prob(c(0.5, 1, 1.5))
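The logit-to-probability conversion is the standard inverse-logit, so the example above can be cross-checked against base R; a minimal sketch of that arithmetic:
logit <- c(0.5, 1, 1.5)
exp(logit) / (1 + exp(logit))   # inverse-logit: exp(logit) / (1 + exp(logit))
plogis(logit)                   # stats::plogis computes the same transform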
In part of the cleaning process, we needed to easily remove columns that only contained NA values.
na_col_rm(data)
data | data.frame with data |
data.frame without columns that only have NA values
na_col_rm(mtcars)
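mtcars has no all-NA columns, so the documented behaviour is easier to see with a toy data frame; a minimal sketch (the Filter() line shows equivalent base-R logic, not necessarily the package's implementation):
toy <- data.frame(id = 1:3, empty = NA, score = c(2, 5, 7))
na_col_rm(toy)                                  # should drop the all-NA 'empty' column
Filter(function(col) !all(is.na(col)), toy)     # equivalent logic in base R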
Calculate percent
pc(x)
x | vector to calculate the percent from |
a vector of percentages, each element expressed relative to the vector's total
pc(1:10)
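Per the value description, each element is expressed as a share of the vector's total; a minimal sketch of that logic (not necessarily the package's exact implementation):
x <- 1:10
x / sum(x) * 100   # each element as a percent of the total; the result sums to 100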
Pretty percent displaying
percent(x, accuracy = 1, ...)
x | vector of numbers |
accuracy | accuracy of the percent |
... | other arguments to |
character vector with percentage sign at the end
percent(10)
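A hedged illustration of the accuracy argument; the values are illustrative only and the calls are not run here:
## Not run:
percent(10)                  # formatted with a trailing percentage sign
percent(10, accuracy = 0.1)  # finer rounding via the documented accuracy argument
## End(Not run)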
Fitted models produce output that requires a little work before it can be plotted. This function cleans up the model output and, in particular, prepares the data for plotting.
prep_model_output(data, model, y, reverse = FALSE)
data | Data used in the model |
model | model output |
y | what goes on the y-axis |
reverse | whether the categorical scale should be reversed |
fortified data for plotting
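prep_model_output has no bundled example; a hypothetical sketch of the intended pipeline into ggmodel, where the object names and the choice of y are placeholders rather than values from the package:
## Not run:
# 'fit' stands in for a fitted model and 'model_data' for the data used to fit it
plot_data <- prep_model_output(data = model_data, model = fit, y = key)
ggmodel(plot_data, y = key, reverse = FALSE)
## End(Not run)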
In all, 16 surveys were launched for the Global Brain Health Survey. These were in different languages to try to capture as many respondents as possible, especially in Europe.
surveys()
a tibble with 16 rows and 3 columns
surveys()
Format thousands
thousand(x)
x | numeric vector |
character vector in which thousands are separated from the following digits by a space
thousand(c(1000, 150000, 16000))
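The same thousands-separator formatting can be reproduced with base R; a minimal sketch of the equivalent logic (not necessarily the package's implementation):
format(c(1000, 150000, 16000), big.mark = " ", scientific = FALSE, trim = TRUE)
# "1 000" "150 000" "16 000"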