2.1 Data Collection and Descriptive Statistics (Part 2)#

Summarizing and Visualizing One Qualitative Variable#

In the following discussion, we explore a dataset that contains both qualitative and quantitative variables. Our initial focus is on the qualitative part of the data, which we first summarize systematically and then visualize.

We explore the freMTPL2freq dataset from the R package CASdatasets, which comprises a French motor third-party liability (MTPL) insurance portfolio with observed claim counts in a single accounting year. The dataset contains the following 12 variables:

  1. IDpol: Unique identifier representing the policy number.

  2. ClaimNb: Number of claims associated with the policy.

  3. Exposure: Total exposure in yearly units, indicating the duration of the policy.

  4. Area: Categorical variable denoting the area code.

  5. VehPower: Ordered categorical variable representing the power of the car.

  6. VehAge: Age of the car in years.

  7. DrivAge: Age of the driver in years.

  8. BonusMalus: Bonus-malus level ranging from 50 to 230, with a reference level of 100.

  9. VehBrand: Categorical variable indicating the car brand.

  10. VehGas: Binary variable indicating whether the car uses diesel or regular fuel.

  11. Density: Density of inhabitants per square kilometer in the city of the driver’s residence.

  12. Region: Categorical variable representing regions in France prior to 2016.

Tip

Students interested in understanding how different models, from traditional statistics to advanced machine learning, explain and predict the number of insurance claims should check out the article “Case Study: French Motor Third-Party Liability Claims” (Noll, Salzmann, and Wuthrich [NSW20]). It provides valuable insights into the topic and offers a great learning experience in insurance analytics.

Summarizing one qualitative variable involves describing the essential characteristics of a categorical data set. Here are several ways to summarize a qualitative variable:

1. Frequency Distribution#

  • Create a table that shows the count or frequency of each category in the qualitative variable.

  • Include both the absolute frequencies (counts) and relative frequencies (percentages).

suppressMessages({
  suppressWarnings(library(CASdatasets))
})


data(freMTPL2freq)
str(freMTPL2freq)
'data.frame':	677991 obs. of  12 variables:
 $ IDpol     : Factor w/ 677991 levels "1","3","5","10",..: 1 2 3 4 5 6 7 8 9 10 ...
 $ ClaimNb   : num  0 0 0 0 0 0 0 0 0 0 ...
 $ Exposure  : num  0.1 0.77 0.75 0.09 0.84 0.52 0.45 0.27 0.71 0.15 ...
 $ VehPower  : int  5 5 6 7 7 6 6 7 7 7 ...
 $ VehAge    : int  0 0 2 0 0 2 2 0 0 0 ...
 $ DrivAge   : int  55 55 52 46 46 38 38 33 33 41 ...
 $ BonusMalus: int  50 50 50 50 50 50 50 68 68 50 ...
 $ VehBrand  : Factor w/ 11 levels "B1","B10","B11",..: 4 4 4 4 4 4 4 4 4 4 ...
 $ VehGas    : chr  "Regular" "Regular" "Diesel" "Diesel" ...
 $ Area      : Factor w/ 6 levels "A","B","C","D",..: 4 4 2 2 2 5 5 3 3 2 ...
 $ Density   : int  1217 1217 54 76 76 3003 3003 137 137 60 ...
 $ Region    : Factor w/ 22 levels "Alsace","Aquitaine",..: 22 22 19 2 2 17 17 13 13 18 ...

Example 2.13 Summarizing one qualitative variable: freMTPL2freq dataset

What are the different categories present in the “Area” variable, and how frequently do each of these categories occur in the dataset? Provide both the absolute frequencies (counts) and the relative frequencies (percentages).

To answer Example 2.13 and create a frequency distribution for the “Area” variable in the “freMTPL2freq” dataset, you can use R. Here’s a step-by-step guide:

# Install and load the CASdatasets package
if (!requireNamespace("CASdatasets", quietly = TRUE)) {
  install.packages("CASdatasets")
}
suppressMessages({
  suppressWarnings(library(CASdatasets))
})

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# View the unique categories in the "Area" variable
unique_areas <- unique(freMTPL2freq$Area)
print(unique_areas)

# Create a frequency table
frequency_table <- table(freMTPL2freq$Area)

# Convert absolute frequencies to percentages
relative_frequencies <- prop.table(frequency_table) * 100

# Combine absolute and relative frequencies
result <- data.frame(Area = names(frequency_table),
                     Absolute_Frequency = as.numeric(frequency_table),
                     Relative_Frequency = as.numeric(relative_frequencies))

# Print the result
print(result)
[1] D B E C F A
Levels: A B C D E F
  Area Absolute_Frequency Relative_Frequency
1    A             103952           15.33236
2    B              75457           11.12950
3    C             191874           28.30038
4    D             151592           22.35900
5    E             137163           20.23080
6    F              17953            2.64797

This code snippet does the following:

  1. Installs and loads the CASdatasets package (if not already installed).

  2. Loads the “freMTPL2freq” dataset.

  3. Prints the unique categories in the “Area” variable.

  4. Creates a frequency table for the “Area” variable using the table() function.

  5. Converts absolute frequencies to relative frequencies using the prop.table() function.

  6. Combines absolute and relative frequencies into a data frame for a comprehensive result.

  7. Prints the final result, showing both absolute and relative frequencies.

Example 2.14 Summarizing one qualitative variable: freMTPL2freq dataset

Can you provide a breakdown of the distribution of car brands (“VehBrand”) in the dataset, highlighting the frequency of each brand? Show both the absolute frequencies (counts) and the relative frequencies (percentages).

Example 2.15 Summarizing one qualitative variable: freMTPL2freq dataset

For the “Region” variable, which represents regions in France, could you outline the frequency distribution of each region within the dataset? Present both the absolute frequencies (counts) and the relative frequencies (percentages).
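
A minimal sketch for Example 2.14, reusing the table() and prop.table() approach from Example 2.13 (Example 2.15 can be answered the same way by replacing VehBrand with Region):

# Frequency distribution of car brands (VehBrand)
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

brand_table <- table(freMTPL2freq$VehBrand)        # absolute frequencies
brand_percent <- prop.table(brand_table) * 100     # relative frequencies (%)

brand_result <- data.frame(VehBrand = names(brand_table),
                           Absolute_Frequency = as.numeric(brand_table),
                           Relative_Frequency = round(as.numeric(brand_percent), 2))

print(brand_result)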

2. Bar Charts#

  • Represent the frequencies visually using a bar chart.

  • Each category is represented by a bar, and the height of the bar corresponds to the frequency or percentage.

Tip

To create a bar chart for the “Area” variable in R, you can use the barplot() function. Here’s a step-by-step guide:

# Load necessary libraries
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# Create a frequency table for the "Area" variable
frequency_table <- table(freMTPL2freq$Area)

# Convert absolute frequencies to percentages
relative_frequencies <- prop.table(frequency_table) * 100

# Combine absolute and relative frequencies into a data frame
result <- data.frame(Area = names(frequency_table),
                     Absolute_Frequency = as.numeric(frequency_table),
                     Relative_Frequency = as.numeric(relative_frequencies))

# Create a bar chart (with extra headroom on the y-axis so the data labels fit)
bar_positions <- barplot(frequency_table, main = "Frequency Distribution of Areas",
                         xlab = "Area Categories", ylab = "Frequency", col = "skyblue",
                         ylim = c(0, max(frequency_table) * 1.1))

# Add data labels above the bars
text(x = bar_positions, y = frequency_table, labels = frequency_table, pos = 3, col = "black")

# Display the result
print(result)
  Area Absolute_Frequency Relative_Frequency
1    A             103952           15.33236
2    B              75457           11.12950
3    C             191874           28.30038
4    D             151592           22.35900
5    E             137163           20.23080
6    F              17953            2.64797
[Figure: bar chart “Frequency Distribution of Areas”]

3. Pie Charts#

  • Display the relative frequencies of each category in a circular pie chart.

  • Each category is represented by a slice, and the size of the slice corresponds to the percentage of occurrences.

Tip

To create a pie chart for the “Area” variable in R, you can use the pie() function. Here’s a step-by-step guide:

# Load necessary libraries
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# Create a frequency table for the "Area" variable
frequency_table <- table(freMTPL2freq$Area)

# Convert absolute frequencies to percentages
relative_frequencies <- prop.table(frequency_table) * 100

# Combine absolute and relative frequencies into a data frame
result <- data.frame(Area = names(frequency_table),
                     Absolute_Frequency = as.numeric(frequency_table),
                     Relative_Frequency = as.numeric(relative_frequencies))

# Create a pie chart
pie(frequency_table, labels = result$Area, main = "Pie Chart of Areas",
    col = rainbow(length(result$Area)), cex = 1.8)

# Display the result
print(result)
  Area Absolute_Frequency Relative_Frequency
1    A             103952           15.33236
2    B              75457           11.12950
3    C             191874           28.30038
4    D             151592           22.35900
5    E             137163           20.23080
6    F              17953            2.64797
[Figure: pie chart “Pie Chart of Areas”]

Tip

  1. The cex parameter in the pie() function controls the size of the labels in the pie chart. Specifically, it stands for “character expansion,” and it is used to adjust the size of the labels relative to the default size.

  2. The pie() function in R does not automatically arrange the slices in descending or ascending order based on frequencies or percentages. It plots the slices in the order they appear in the data.

If you want to arrange the slices in descending order of frequencies or percentages, you should sort the data frame before creating the pie chart. Here’s an example:

# Sort the result data frame by Absolute_Frequency in descending order
result_sorted <- result[order(result$Absolute_Frequency, decreasing = TRUE), ]

# Create a pie chart with the sorted data
pie(result_sorted$Absolute_Frequency, labels = result_sorted$Area,
    main = "Pie Chart of Areas", col = rainbow(length(result_sorted$Area)), cex = 1.8)
[Figure: pie chart “Pie Chart of Areas” with slices sorted by frequency]

Tip

The code below uses ggplot2 to create a pie chart with labels that include both the “Area” and “Relative_Frequency” values. You may need to adjust the size, colors, or other parameters based on your preferences.

Further information about pie charts with ggplot2 can be found at https://r-graph-gallery.com/piechart-ggplot2.html

# Install and load the ggplot2 package
if (!requireNamespace("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2")
}
library(ggplot2)

# Create a data frame for ggplot2
df_ggplot <- data.frame(
  Area = names(frequency_table),
  Absolute_Frequency = as.numeric(frequency_table),
  Relative_Frequency = as.numeric(relative_frequencies)
)

# Create a pie chart with ggplot2
ggplot(df_ggplot, aes(x = "", y = Absolute_Frequency, fill = Area)) +
  geom_bar(stat = "identity", width = 1, color = "white") +
  coord_polar("y", start = 0) +
  theme_void() +
  geom_text(aes(label = paste0(Area, ": ", round(Relative_Frequency, 1), "%")),
            position = position_stack(vjust = 0.5), size = 5) +
  ggtitle("Pie Chart of Areas with Data Labels")
[Figure: ggplot2 pie chart “Pie Chart of Areas with Data Labels”]

Example 2.16 Visualizing one qualitative variable: freMTPL2freq dataset

How can we effectively visualize the distribution of car brands (‘VehBrand’) and regions (‘Region’) in the ‘freMTPL2freq’ dataset?

Using R, create both a bar chart and a pie chart for the ‘VehBrand’ variable, illustrating the counts of each car brand (if possible). Additionally, generate similar visualizations for the ‘Region’ variable, providing insights into the geographical distribution of insurance claims.
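
As one possible starting point for Example 2.16, the sketch below produces a bar chart and a pie chart for “VehBrand” using the same barplot() and pie() workflow shown above for “Area”; the “Region” charts are obtained by substituting Region for VehBrand.

# Bar chart and pie chart of car brands (VehBrand)
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

brand_table <- table(freMTPL2freq$VehBrand)

# Bar chart: one bar per brand, height equal to the number of policies
barplot(brand_table, main = "Frequency Distribution of Car Brands",
        xlab = "Vehicle Brand", ylab = "Frequency", col = "skyblue", las = 2)

# Pie chart: one slice per brand, sized by its share of policies
pie(brand_table, main = "Pie Chart of Car Brands",
    col = rainbow(length(brand_table)))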

Example 2.17 Excel: Visualizing one qualitative variable

In Excel, how can we effectively summarize and visualize the ‘Area’ variable based on the first 100 observations of the dataset?

  1. Which specific ‘Area’ categories appear most frequently within the subset of the first 100 observations, and how do their frequencies differ from the entire dataset?

  2. How do the frequency distribution and visual representation of the ‘Area’ variable among the first 100 observations compare to the overall dataset?

In Excel,

  • one can analyze the ‘Area’ variable by initially selecting the first 100 observations to streamline the dataset.

  • Subsequently, one should employ Excel functions or PivotTables to compute the frequency distribution of ‘Area’ within this subset, facilitating a concise overview of categorical occurrences.

  • To enhance comprehension, one can then create visual representations, such as bar charts or pie charts, effectively illustrating the distribution patterns.

head(freMTPL2freq,100)
A data.frame: 100 × 12 (first five rows shown; the remaining rows are omitted here for brevity)

  IDpol ClaimNb Exposure VehPower VehAge DrivAge BonusMalus VehBrand  VehGas Area Density      Region
1     1       0     0.10        5      0      55         50      B12 Regular    D    1217 Rhone-Alpes
2     3       0     0.77        5      0      55         50      B12 Regular    D    1217 Rhone-Alpes
3     5       0     0.75        6      2      52         50      B12  Diesel    B      54    Picardie
4    10       0     0.09        7      0      46         50      B12  Diesel    B      76   Aquitaine
5    11       0     0.84        7      0      46         50      B12  Diesel    B      76   Aquitaine

Using PivotTables in Excel#

Using PivotTables in Excel to analyze the ‘Area’ variable based on the first 100 observations involves the following steps:

  1. Data Selection:

    • Open your Excel workbook and navigate to the worksheet containing the dataset.

    • Select the first 100 observations of the ‘Area’ variable, ensuring you include the column headers.

  2. Insert PivotTable:

    • Go to the “Insert” tab on the Excel ribbon.

    • Click on “PivotTable” and choose the range of the selected data. Ensure the “New Worksheet” option is selected.

  3. Configure PivotTable Fields:

    • In the newly created PivotTable, drag the ‘Area’ field to the “Rows” area. This will list unique ‘Area’ categories.

    • Drag the ‘Area’ field again to the “Values” area. This will default to counting the occurrences, providing the frequency distribution.

  4. Adjust PivotTable Options:

    • Customize the PivotTable by right-clicking on the count in the “Values” area, selecting “Value Field Settings,” and renaming it to “Frequency.”

  5. Create a PivotChart (Optional):

    • For a visual representation, select any cell within the PivotTable, go to the “Insert” tab, and choose “PivotChart.”

    • Choose the chart type (e.g., bar chart or pie chart) to visualize the frequency distribution.

By following these steps, students can efficiently use PivotTables to summarize and visualize the ‘Area’ variable based on the first 100 observations in Excel. The resulting table and optional chart provide a clear representation of the categorical distribution within the subset.
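
For readers who want to cross-check the Excel results, a minimal R sketch restricted to the first 100 observations (selected here with head(), mirroring the subset used in Excel) might look as follows:

# "Area" frequencies for the first 100 observations, compared with the full dataset
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

area_first100 <- head(freMTPL2freq$Area, 100)

subset_counts <- table(area_first100)               # absolute frequencies (subset)
subset_percent <- prop.table(subset_counts) * 100   # relative frequencies (subset)

full_percent <- prop.table(table(freMTPL2freq$Area)) * 100   # relative frequencies (full data)

print(subset_counts)
print(round(subset_percent, 2))
print(round(full_percent, 2))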

Summarizing and Visualizing One Quantitative Variable#

In the world of data analysis, diving into the characteristics of a dataset is essential for extracting meaningful insights. When we focus on summarizing and visualizing one quantitative variable, a crucial aspect is central tendency. This represents the typical value around which other data points cluster.

Descriptive measures like the mean and median offer insights into this central tendency. Another important aspect is understanding the variability in the data – how much the data points differ from the central value. Measures such as the range, interquartile range, and standard deviation quantify this spread. Visualization complements these measures, providing a graphical representation of the data’s distribution. This visual representation helps us identify patterns, outliers, and the overall shape of the dataset.

Essentially, these analytical tools collectively enable us to untangle the complexities of quantitative variables, facilitating a deeper understanding of datasets.

In this analysis, we will use the same dataset that was employed when examining the qualitative variable. Summarizing and visualizing the variable “Exposure” in the freMTPL2freq dataset involves a series of steps:

Summarizing#

1. Descriptive Statistics#

Calculate basic descriptive statistics for “Exposure,” such as the mean, median, minimum, maximum, and standard deviation. This provides a summary of the central tendency and variability of the variable.

Measures of Central Tendency#

Measures of central tendency are essential statistical metrics that provide insights into the central or typical value within a dataset. These indicators help us understand the central position around which data points cluster. Commonly used measures of central tendency include the mean, median, and mode. Each of these measures offers a unique perspective on the central tendencies of a dataset, aiding in the interpretation and analysis of numerical information.

Mean#

The average, often referred to as the mean, is a fundamental measure of central tendency. It is calculated by summing up all the values in a dataset and dividing the sum by the total number of values. The mean provides a numeric representation of the central position of the data, making it a widely used and easily interpretable indicator.

Let \(x_1, x_2, x_3, \ldots, x_n\) be our sample. The sample mean \(\bar{x}\) is

\[\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}.\]

Tip

  1. The population mean, \(\mu\) (lowercase mu, Greek alphabet), is the mean of all \(x\) values for the entire population.

  2. The mean is sensitive to extreme values, making it important to consider the distribution and characteristics of the data when interpreting this central tendency measure.

Median#

The median is another important measure of central tendency. It represents the middle value in a dataset when arranged in ascending or descending order. To calculate the median, the dataset is first sorted, and then the middle value is determined. If the dataset has an even number of observations, the median is the average of the two middle values.

Let \(x_1, x_2, x_3, \ldots, x_n\) be our sample. The median is denoted as \(\tilde{x}\), and it is determined based on the position of the middle values in the ordered dataset.

Odd Number of Observations:

\[\tilde{x} = x_{\frac{n+1}{2}}\]

Even Number of Observations:

\[\tilde{x} = \frac{x_{\frac{n}{2}} + x_{\frac{n}{2}+1}}{2}\]

Tip

  1. The population median, \(M\) (uppercase mu in the Greek alphabet), is the data value in the middle position of the entire ranked population.

  2. The median is less sensitive to extreme values, making it a robust measure of central tendency that is not heavily influenced by outliers. This characteristic is particularly advantageous when dealing with datasets with skewed distributions or significant outliers, as the small numeric example below illustrates.
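
A small, made-up sample makes the contrast concrete: replacing one observation with an extreme value shifts the mean noticeably while the median stays put.

# Illustrative (made-up) sample: sensitivity of the mean vs. robustness of the median
x <- c(2, 3, 4, 5, 6)
mean(x)      # 4
median(x)    # 4

x_extreme <- c(2, 3, 4, 5, 60)   # the largest value is now an outlier
mean(x_extreme)      # 14.8 -- pulled strongly toward the outlier
median(x_extreme)    # 4    -- unchanged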

Mode#

The mode is the value that occurs most frequently in a dataset. Unlike the mean and median, a dataset can have multiple modes, making it a multimodal distribution. If no value repeats, the dataset is considered to have no mode.

Let \(x_1, x_2, x_3, \ldots, x_n\) be our sample. The mode is denoted as \(Mo\) and it is the value(s) with the highest frequency.

If a single value attains the highest frequency, \(Mo\) is that value.

If several distinct values share the highest frequency, \(Mo\) is the set of all such values (and the distribution is multimodal).
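
Base R has no built-in function for the statistical mode (the base function mode() returns an object's storage type), so a small helper is usually written by hand; a minimal sketch is shown below.

# Helper returning all values that attain the highest frequency (the sample mode)
sample_mode <- function(x) {
  freq <- table(x)
  names(freq)[freq == max(freq)]
}

sample_mode(c(1, 2, 2, 3, 3, 4))     # "2" "3" -- two modes
sample_mode(c("A", "B", "B", "C"))   # "B"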

Example 2.18 Descriptive statistics for the “Exposure” variable: freMTPL2freq dataset

What are the basic descriptive statistics for the “Exposure” variable in the freMTPL2freq dataset?

To explore and understand the characteristics of the “Exposure” variable in the freMTPL2freq dataset, we will use the summary() function in R. This function provides key descriptive statistics, such as the mean, median, minimum, maximum, and quartiles, offering a comprehensive overview of the distribution and central tendencies of the “Exposure” variable.

# Load necessary libraries
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

summary(freMTPL2freq$Exposure)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max. 
0.002732 0.180000 0.490000 0.528743 0.990000 2.010000 

Measures of Position

Quartiles are specific values within a dataset that partition the ordered data into four equal parts. In any dataset, there are three quartiles.

  • The first quartile, denoted as Q1, is a value such that at most 25% of the data are smaller than Q1, and at most 75% are larger.

  • The second quartile is synonymous with the median, representing the middle value of the dataset.

  • The third quartile, denoted as Q3, is a value such that at most 75% of the data are smaller than Q3, and at most 25% are larger.

In essence, quartiles provide valuable insights into the distribution of data by identifying key points that divide it into quarters.
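
In R, one way to obtain the quartiles of the “Exposure” variable is the quantile() function; the same values also appear in the summary() output shown earlier.

# Quartiles (Q1, median, Q3) of the "Exposure" variable
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

quantile(freMTPL2freq$Exposure, probs = c(0.25, 0.50, 0.75))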

Measures of Variability#

Measures of variability complement measures of central tendency by providing insights into the spread or dispersion of data points within a dataset. Two commonly used measures of variability are Variance (\(\sigma^2\) or \(s^2\)) and Standard Deviation (\(\sigma\) or \(s\)). These metrics quantify the extent to which individual data points deviate from the central tendency, offering a comprehensive understanding of the data’s distribution.

Population Variance (\(\sigma^2\)):

\[\sigma^2 = \frac{\sum_{i=1}^{N} (X_i - \mu)^2}{N}\]

Sample Variance (\(s^2\)):

\[s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}\]

Population Standard Deviation (\(\sigma\)):

\[\sigma = \sqrt{\sigma^2}\]

Sample Standard Deviation (\(s\)):

\[s = \sqrt{s^2}\]

Where:

  • \(X_i\) or \(x_i\) represents each individual data point in the population or sample.

  • \(\mu\) is the population mean.

  • \(\bar{x}\) is the sample mean.

  • \(N\) is the total number of data points in the population.

  • \(n\) is the total number of data points in the sample.

These measures offer valuable insights into the variability of data points, helping to identify the degree of dispersion and assess the overall stability of the dataset.

Example 2.19 Measures of dispersion: freMTPL2freq dataset

What are the key measures of dispersion for the “Exposure” variable in the freMTPL2freq dataset?

To explore the variability and spread of the “Exposure” variable in the freMTPL2freq dataset, we can employ the sd() function in R for the standard deviation and the var() function for the variance. These measures provide valuable insights into how individual values deviate from the central tendency, contributing to a more comprehensive understanding of the dataset’s distribution.

# Load necessary libraries
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# Calculate the standard deviation and variance for the "Exposure" variable
standard_deviation <- sd(freMTPL2freq$Exposure)
variance <- var(freMTPL2freq$Exposure)

# Output the measures of dispersion
cat("Standard Deviation:", standard_deviation, "\n")
cat("Variance:", variance, "\n")
Standard Deviation: 0.3644406 
Variance: 0.132817 

Tip

In R, one popular package that provides comprehensive summary statistics is the psych package. It offers the describe() function, which generates a detailed summary including

  • measures of central tendency,

  • dispersion, and

  • distribution shape for each variable in a dataset.

The output includes mean, standard deviation, skewness, kurtosis, minimum, 25th percentile, median, 75th percentile, and maximum.

Here is an example with the describe() function.

# Load the psych and CASdatasets packages
library(psych)
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# Use describe() to get comprehensive summary statistics for the dataset
summary_stats <- describe(freMTPL2freq)

# View the summary statistics
print(summary_stats)
           vars      n      mean        sd    median   trimmed       mad min
IDpol*        1 677991 338996.00 195719.29 338996.00 338996.00 251297.73   1
ClaimNb       2 677991      0.04      0.21      0.00      0.00      0.00   0
Exposure      3 677991      0.53      0.36      0.49      0.53      0.55   0
VehPower      4 677991      6.45      2.05      6.00      6.19      1.48   4
VehAge        5 677991      7.04      5.67      6.00      6.56      5.93   0
DrivAge       6 677991     45.50     14.14     44.00     44.73     14.83  18
BonusMalus    7 677991     59.76     15.64     50.00     56.34      0.00  50
VehBrand*     8 677991      5.05      3.07      4.00      4.89      4.45   1
VehGas*       9 677991      1.51      0.50      2.00      1.51      0.00   1
Area*        10 677991      3.29      1.38      3.00      3.33      1.48   1
Density      11 677991   1792.41   3958.57    393.00    921.94    526.32   1
Region*      12 677991     12.96      6.55     12.00     13.04      7.41   1
                 max     range  skew kurtosis     se
IDpol*     677991.00 677990.00  0.00    -1.20 237.70
ClaimNb        16.00     16.00  6.80   118.71   0.00
Exposure        2.01      2.01  0.09    -1.52   0.00
VehPower       15.00     11.00  1.17     1.67   0.00
VehAge        100.00    100.00  1.15     6.52   0.01
DrivAge       100.00     82.00  0.44    -0.34   0.02
BonusMalus    230.00    180.00  1.73     2.67   0.02
VehBrand*      11.00     10.00  0.14    -1.10   0.00
VehGas*         2.00      1.00 -0.04    -2.00   0.00
Area*           6.00      5.00 -0.18    -0.88   0.00
Density     27000.00  26999.00  4.65    24.87   4.81
Region*        22.00     21.00  0.04    -1.43   0.01

2. Frequency Distribution#

Create a frequency distribution to understand the distribution of different exposure values. This can help identify common exposure levels and potential outliers.

Frequency Distribution

Definition: A frequency distribution is a tabular representation of a set of data that shows the number of times each distinct value or range of values occurs in a dataset. It provides a systematic way to organize and summarize quantitative data, allowing for a clearer understanding of the data’s distribution and patterns.

Creating a Frequency Distribution for a Quantitative Variable: The process of creating a frequency distribution for a quantitative variable involves several steps:

  1. Identify the Range of Values: Determine the range of values present in the quantitative variable. This involves finding the minimum and maximum values.

  2. Divide the Range into Intervals (Bins): Divide the range of values into intervals or bins. Each bin represents a range of values, and the number of values falling within each bin will be counted.

  3. Count the Frequencies: Count the number of data points that fall into each bin. This is done by examining each data point and determining which bin it belongs to.

  4. Create a Table: Construct a table with two columns – one for the bins (intervals) and another for the frequencies. Each row in the table represents a bin, and the corresponding frequency indicates how many values fall within that bin.

  5. Display the Information: Present the frequency distribution in a clear and visually accessible format. This can be done through a table, histogram, or other graphical representation.

The resulting frequency distribution provides a snapshot of the data’s distribution, highlighting the concentration of values within specific intervals and revealing any patterns or outliers. It is a fundamental tool for exploring and summarizing quantitative data.

The rules of thumb for choosing the number of bins

When determining the number of bins for a histogram, the Square Root Rule is a straightforward guideline suggesting that the number of bins should be approximately equal to the square root of the total number of data points,

\[\text{Number of bins} \approx \sqrt{n},\]

where \(n\) is the number of data points.

This provides a balanced representation, avoiding excessive detail or oversimplification. Additionally, alternative rules, such as Sturges’ Rule, Freedman-Diaconis Rule, Scott’s Rule, and Doane’s Formula, offer refined approaches.
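
As a quick illustration, the sketch below computes the Square Root Rule bin count for the “Exposure” variable and, for comparison, the Sturges count from base R's nclass.Sturges(); note that hist() treats a single number passed to breaks only as a suggestion.

# Suggested bin counts for the "Exposure" variable
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

n <- length(freMTPL2freq$Exposure)

k_sqrt <- ceiling(sqrt(n))                          # Square Root Rule
k_sturges <- nclass.Sturges(freMTPL2freq$Exposure)  # Sturges' Rule (base R helper)

c(square_root = k_sqrt, sturges = k_sturges)

# Histogram using the square-root bin count as a suggestion for breaks
hist(freMTPL2freq$Exposure, breaks = k_sqrt,
     main = "Exposure (Square Root Rule bin count)", xlab = "Exposure", col = "skyblue")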

Visualizing#

3. Histogram#

Construct a histogram to visualize the distribution of “Exposure.” The histogram provides a visual representation of the frequency of different exposure intervals, offering insights into the variable’s overall pattern.

Definition:

Histogram: A histogram is a graphical representation of the distribution of a quantitative variable. It uses bars to represent the frequencies of different values or intervals. The bars are typically drawn adjacent to each other, with the width of each bar corresponding to the width of the interval, and the height corresponding to the frequency of observations in that interval.

Example 2.20 Frequency Distribution and Histogram: freMTPL2freq dataset

How is the “Exposure” variable distributed in the freMTPL2freq dataset?

To gain insights into the distribution of the “Exposure” variable, we can create a frequency distribution using appropriate bins.

The hist() function in R is a useful tool for generating histograms, allowing us to visualize the frequency of different exposure levels.

Additionally, we can use the table() function to create a frequency table displaying the ranges of data along with their respective counts.

This tabular representation enhances our understanding of the concentration of values and potential outliers within the dataset.

# Load necessary libraries
suppressMessages(library(CASdatasets))

# Load the freMTPL2freq dataset
data("freMTPL2freq")

# Create a histogram for the "Exposure" variable
hist_data <- hist(freMTPL2freq$Exposure, main = "Frequency Distribution of Exposure", xlab = "Exposure", ylab = "Frequency", col = "skyblue")

# Create a frequency table
frequency_table <- table(cut(freMTPL2freq$Exposure, breaks = hist_data$breaks, include.lowest = TRUE, right = TRUE))

# Display the frequency table
frequency_table
  [0,0.1] (0.1,0.2] (0.2,0.3] (0.3,0.4] (0.4,0.5] (0.5,0.6] (0.6,0.7] (0.7,0.8] 
   127594     59973     62369     41184     62989     36998     32114     38023 
(0.8,0.9]   (0.9,1]   (1,1.1] (1.1,1.2] (1.2,1.3] (1.3,1.4] (1.4,1.5] (1.5,1.6] 
    24864    190659       819       202        80        41        40        13 
(1.6,1.7] (1.7,1.8] (1.8,1.9]   (1.9,2]   (2,2.1] 
       12         4         6         5         2 
[Figure: histogram “Frequency Distribution of Exposure”]

4. Density Plot#

Create a density plot to visualize the distribution of “Exposure.” Unlike a histogram, a density plot provides a smooth, continuous representation of the data distribution. It is especially useful for identifying patterns in the data and understanding the shape of the distribution.

Definition:

Density Plot: A density plot is a smoothed representation of the distribution of a quantitative variable. It uses a continuous curve to depict the estimated probability density function. Density plots are advantageous for highlighting subtle features in the data distribution and are particularly useful when dealing with large datasets.

# Create a density plot for the "Exposure" variable
plot(density(freMTPL2freq$Exposure), main = "Density Plot of Exposure", xlab = "Exposure", col = "skyblue")
[Figure: density plot “Density Plot of Exposure”]

5. Box Plot#

Generate a box plot to visualize the distribution of “Exposure.” A box plot provides a summary of the data distribution, displaying the median, quartiles, and potential outliers. It offers a concise visual representation of the central tendency and spread of the data.

Definition:

Box Plot: A box plot is a visual summary of the distribution of a quantitative variable. It displays the median, quartiles, and potential outliers using a box-and-whisker format. Box plots are effective for identifying the central tendency and spread of the data, making them useful for comparing distributions.

# Create a box plot for the "Exposure" variable
boxplot(freMTPL2freq$Exposure, main = "Box Plot of Exposure", col = "skyblue")
[Figure: box plot “Box Plot of Exposure”]

Outliers

In a box plot, outliers are individual data points that significantly differ from the overall pattern of the distribution. These points fall beyond the “whiskers” of the box plot, which are lines extending from the box. Outliers can provide valuable information about unusual or extreme values in the dataset.

Importance of Identifying Outliers:

Identifying outliers is important for several reasons:

  • Data Quality Assessment: Outliers may indicate errors in data collection or measurement.

  • Influential Impact: Outliers can disproportionately influence statistical analyses and model outcomes.

  • Understanding Variability: Outliers offer insights into the variability and potential patterns in the data.

Display on Box Plot:

In a box plot, outliers are typically shown as individual points beyond the “whiskers.” The whiskers represent the range within which most of the data fall, and any points beyond them are flagged as potential outliers. The exact rule can vary with the method used, but a common convention (and the default in R’s boxplot()) treats points lying more than 1.5 times the interquartile range (IQR) below the first quartile or above the third quartile as outliers.
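
A minimal sketch computing these 1.5 × IQR fences for the “Exposure” variable and counting how many observations fall outside them:

# Outlier fences for "Exposure" using the 1.5 * IQR convention
suppressMessages(library(CASdatasets))
data("freMTPL2freq")

q1  <- quantile(freMTPL2freq$Exposure, 0.25)
q3  <- quantile(freMTPL2freq$Exposure, 0.75)
iqr <- q3 - q1

lower_fence <- unname(q1 - 1.5 * iqr)
upper_fence <- unname(q3 + 1.5 * iqr)

c(lower = lower_fence, upper = upper_fence)

# Number of observations flagged as potential outliers
sum(freMTPL2freq$Exposure < lower_fence | freMTPL2freq$Exposure > upper_fence)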

Example 2.21 Descriptive statistics for the “Exposure” variable: freMTPL2freq dataset

Upon reviewing the descriptive summary statistics for the “Exposure” variable in the freMTPL2freq dataset using the summary() function, what patterns, central tendencies, and notable characteristics emerge? Analyzing the mean, median, quartiles, and other descriptive measures, what information can be inferred about the distribution of exposure values and their significance within the dataset?

Example 2.22 Excel: Exploring and Visualizing One Quantitative Variable

In Excel, how can we effectively summarize and visualize the ‘Exposure’ variable based on the first 100 observations of the dataset?

  1. Which specific exposure intervals or values are most common within the subset of the first 100 observations, and how do their frequencies differ from the entire dataset?

  2. How does the frequency distribution and visual representation of the ‘Exposure’ variable among the first 100 observations compare to the overall dataset?

Consider using appropriate Excel tools and functions to generate descriptive statistics, frequency distributions, and visualizations to gain insights into the distribution and characteristics of the ‘Exposure’ variable within the selected subset.