Introduction
A measure of central tendency, or measure of central location, is a single value that summarizes a data set by identifying its central position. The mean, median, and mode are the most commonly used measures of central tendency (Ashenfelter, Levine & Zimmerman, 2006).
The mean
For a data set or distribution, the mean (or average) is the sum of all values in the data set divided by the number of values in that data set (Weisberg, 2001). That is, for n observations with values x1, x2, x3, …, xn, the mean, typically denoted x̄, is calculated as follows:
x̄ = (x1 + x2 + x3 + … + xn)/n
Consider the following data set: 23, 27, 26, 28, 25, 24. The mean will be:
x̄ = (23 + 27 + 26 + 28 + 25 + 24)/6
= 25.5
NB: the mean is the most frequently used measure of central tendency.
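The calculation above can be sketched in Python (the `statistics` module is part of the standard library; the numbers are the data set from the example):

```python
import statistics

data = [23, 27, 26, 28, 25, 24]

# mean = (sum of all values) / (number of values)
mean = sum(data) / len(data)
print(mean)                    # 25.5
print(statistics.mean(data))   # 25.5, the same result via the stdlib
```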
The median
The median is defined as the numeric value that separates the higher half of a data set or sample from the lower half. For a finite set of numbers, the median is found by arranging the observations in ascending order (from the lowest to the highest) and picking the middle value (Zaccagnini & Waud, 2011). Note that if the number of observations is odd, the median is the middle observation; but when the number of observations is even, there is no single middle value. In this case, the median is found by summing the two middle observations and dividing by two.
Considering the following data, 23, 27, 26, 28, 25, 24, the median will be:
23, 24, 25, 26, 27, 28
Median = (25 + 26)/2
= 25.5
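The same steps can be sketched in Python for the even-sized data set above: sort the values, then average the two middle observations.

```python
import statistics

data = [23, 27, 26, 28, 25, 24]
ordered = sorted(data)          # [23, 24, 25, 26, 27, 28]

# even number of observations: average the two middle values
n = len(ordered)
median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
print(median)                   # 25.5
print(statistics.median(data))  # 25.5, the stdlib gives the same result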
Mode
The mode is defined as the most frequent value in a given probability distribution or data set. Like the mean and median, the mode provides a means of capturing important information about a population or random variable in a single quantity (Whitley, 2007). Given the following data set, 3, 4, 3, 2, 1, 5, the mode is 3.
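A short sketch of finding the mode, using the data set above; counting frequencies directly makes the definition explicit:

```python
from collections import Counter
import statistics

data = [3, 4, 3, 2, 1, 5]
counts = Counter(data)           # frequency of each value
print(counts.most_common(1))     # [(3, 2)] -> the value 3 appears twice
print(statistics.mode(data))     # 3
```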
Differences
There are several differences between the discussed measures of central tendency. The obvious difference is the manner in which they are found or calculated.
Type   | Description                                                                               | Example                              | Result
-------|-------------------------------------------------------------------------------------------|--------------------------------------|-------
Mean   | Sum of all values in the data set divided by the number of values                         | (23 + 24 + 25 + 26 + 26 + 28)/6      | 25.33
Median | The numeric value separating the higher half of the ordered data set from the lower half  | 23, 24, 25, 26, 26, 28: (25 + 26)/2  | 25.5
Mode   | The most frequent value in the data set                                                   | 23, 24, 25, 26, 26, 28               | 26
Another difference between the mean, median, and mode stems from the type of data used. Unlike the median and the mean, the mode makes sense even for nominal data that does not consist of numerical values. Taking a sample of Korean names as an example, one might discover that the name “Park” occurs more frequently than other names; in this case, Park would be the modal value of the given set. On the other hand, the mean is the preferred measure of central tendency when the data set is distributed in a continuous and symmetrical manner. When working with skewed or ordinal data, the median is the preferred measure of central tendency. Ashenfelter, Levine & Zimmerman (2006) contend that the mode can also be used in these circumstances, though not as commonly as the median.
While the mode is insensitive to outliers, the mean is sensitive and the median is very robust. The median is therefore the preferred measure of central tendency in the presence of outliers, which usually distort the mean. Another difference between the mean, median, and mode is that the mean and median can only take a single value for a given data set, while the mode can take several. This situation, referred to as multimodal, occurs when a data set has more than one modal value (Bevington & Robinson, 2003).
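The multimodal case can be sketched with `statistics.multimode` (available in Python 3.8 and later), which returns every value tied for the highest frequency; the data sets here are illustrative:

```python
import statistics

unimodal = [23, 24, 25, 26, 26, 28]
bimodal = [1, 1, 2, 2, 3]

print(statistics.multimode(unimodal))  # [26]   -> a single mode
print(statistics.multimode(bimodal))   # [1, 2] -> two modes, i.e. bimodal
```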
Nominal, ordinal, interval, and ratio data
Numbers are typically assigned to various attributes of people, objects, and concepts. This process, which is common in the behavioral and social sciences, is known as measurement. These measurements are classified as nominal, ordinal, interval, or ratio data.
Nominal
The term nominal emanates from the Latin word nomen, which means name. Nominal data refers to discrete, categorical data. Glenberg & Andrzejewski (2007) contend that items measured on a nominal scale have something in common. For example, one might compare a set of countries: the countries may be coded with numbers, but the order of the codes is arbitrary, and calculations such as the mean would be meaningless. It is imperative to understand that nominal data is used for variables where each observation or participant in the study must be placed into one, and only one, of a set of mutually exclusive and exhaustive categories (Glenberg & Andrzejewski, 2007).
Ordinal
Ordinal data refers to quantities or observations that have a natural ordering. This may be used to indicate temporal position or superiority. The order of observations or participants is often defined by assigning numbers to them to show their relative position (Treiman, 2009); sequential symbols or letters may be used as appropriate. For example, a nurse or doctor might ask a patient to rate the amount of pain he or she is feeling on a scale of 1 to 10. A patient who scores 7 is experiencing more pain than a patient who scores 5, who in turn is feeling more pain than one who scores 3. Note, however, that the difference between patients who score 7 and 5 may not be the same as the difference between patients who score 5 and 3. Treiman (2009) argues that what distinguishes ordinal data from the other data types is that they do not indicate the magnitude of the disparity between first, second, and third; they only indicate that first came before second, and so on.
Interval data
Interval data is a measurement in which the difference between each value is equal and meaningful. A good example of interval data is temperature in degrees Celsius, where the difference between 100°C and 90°C is the same as the difference between 90°C and 80°C (Veney, Kros & Rosenthal, 2009). The main difference between interval data and ratio data is that interval data do not have an absolute zero point.
Ratio data
Ratio data are values that have a fixed zero point. For this reason, values of ratio data can be compared as multiples of other values. Variables such as a person’s weight or height are ratio variables: a person can be twice as heavy as another person. Moreover, the difference between individuals aged 53 and 50 is the same as the difference between people aged 13 and 10. Ratio data can therefore be divided and multiplied. Unlike the above measurements, ratio data allows one to look at the ratio of two values (Forsyth et al., 2010). For example, 60°C is not twice as much as 30°C, but a weight of 8 g is twice as much as a weight of 4 g, because weight is a ratio variable unlike temperature, which is an interval variable.
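The ratio-versus-interval distinction can be made concrete with a small arithmetic sketch; converting Celsius to kelvin (a ratio scale with a true zero) shows why the Celsius ratio is misleading:

```python
# Ratio data (weight in grams): ratios are meaningful because 0 g is a
# true zero point.
weight_ratio = 8 / 4
print(weight_ratio)             # 2.0 -> 8 g really is twice as heavy as 4 g

# Interval data (temperature in Celsius): 0 degrees C is not an absolute
# zero, so the ratio 60/30 has no physical meaning. Converting to kelvin
# (a ratio scale) exposes the actual ratio:
kelvin_ratio = (60 + 273.15) / (30 + 273.15)
print(round(kelvin_ratio, 2))   # 1.1 -> 60 degrees C is not "twice" 30 degrees C
```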
Type 1 and type 2 errors
The null hypothesis, denoted H0, is either rejected or accepted based on the value of the test statistic, which may fall in the rejection region or the acceptance region. When the test statistic is found to be close to zero (insignificant), the null hypothesis is accepted (Veney, Kros & Rosenthal, 2009). However, when the test statistic is significant or large, the null hypothesis is rejected. In this acceptance and rejection plan, there are high chances of making an error, either a Type 1 or a Type 2 error. According to Burns & Grove (2009), an error is the difference between the unknown value of the population parameter and the sample statistic that has been used to estimate it.
Type 1 error
Denoted by alpha (α), a Type 1 error occurs when a true null hypothesis is rejected. The null hypothesis is rejected when the test statistic falls in the rejection region. A Type 1 error can be said to have occurred when:
- A coach refuses to play a good player
- An innocent person is sent to prison
- An intelligent student is left in the same class instead of being promoted
Type 2 error
A Type 2 error occurs when a null hypothesis is accepted when it is false. In this case, the researcher rejects the alternative hypothesis and instead accepts the null hypothesis. In the nursing profession, the alternative hypothesis can be said to be the reason why people are treated. It is obvious that when a person goes to hospital, the doctors do not presume that the person is not sick; otherwise they would not conduct tests on him or her. The alternative hypothesis is thus the variant that needs to be evaluated (Burns & Grove, 2009).
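A minimal simulation sketch (not from the source text, only standard-library Python) makes the Type 1 error rate tangible: when a true null hypothesis is tested repeatedly at α = 0.05, every rejection is by construction a Type 1 error, so the rejection rate should land near α.

```python
import random
import statistics

# Hypothetical simulation: repeatedly test a TRUE null hypothesis
# (population mean = 0, known sigma = 1) with a one-sample z-test at
# alpha = 0.05. Each rejection is a Type 1 error.
random.seed(42)
trials, n, rejections = 2000, 30, 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    # z = x_bar / (sigma / sqrt(n)) with sigma = 1
    z = statistics.mean(sample) * n ** 0.5
    if abs(z) > 1.96:           # two-sided critical value at the 5% level
        rejections += 1
print(rejections / trials)      # close to 0.05
```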
Correlation
Correlation, denoted r, is a statistical measure of the degree and direction of the linear relationship between two variables, typically labeled X and Y.
Positive relationship
Given two variables X and Y, a positive relationship occurs when the two variables increase or decrease together. Considering the variables expenditure and income, it is obvious that expenditure and income rise and fall together: when income is low, expenditure also reduces, and vice versa. The two variables are related in the sense that a change in one is accompanied by a change in the same direction in the other (Zaccagnini & Waud, 2011).
Negative relationship
A negative relationship indicates that as one variable increases, the other variable decreases, and vice versa. Considering the variables number of classes missed and performance, it is apparent that the more classes one misses, the lower one’s expected performance, and vice versa.
Level of significance (p-value)
The p-value of a given result indicates the probability that the observed relationship (for example, between variables) or the difference between means in a sample occurred by pure chance, and that no such difference or relationship exists in the population from which the samples were obtained (Forsyth et al., 2010). The most common levels of significance are .05, .01, and .001. A result whose p-value is 0.05 shows that there is a 5 percent probability that the relationship observed in the sample is a fluke. While a result whose p-value is ≤ .01 or ≤ .001 is considered highly significant, results that are significant at a p-value of ≤ .05 are normally considered statistically significant.
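As a sketch of where these thresholds come from, a two-sided p-value for a standard-normal test statistic can be computed with only the standard-library error function; the familiar critical values 1.96 and 2.58 map onto the .05 and .01 levels:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# Conventional significance levels correspond to familiar z values:
print(round(two_sided_p(1.96), 3))  # 0.05
print(round(two_sided_p(2.58), 3))  # 0.01
```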