Talithia Williams
Author
Series
Great Courses volume 11
Description
Start with a hypothesized parameter for a population and determine whether a given sample could have come from that population. Practice this important technique, called hypothesis testing, with a single parameter, such as whether a lifestyle change reduces cholesterol. Discover the power of the p-value in gauging the significance of your result.
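The course carries out such tests in R; a rough Python analogue can sketch the idea. The sketch below uses a one-sided z-test with an assumed known population standard deviation (the standard library has the normal but not the t distribution), and every number in it is hypothetical.

```python
from statistics import NormalDist
from math import sqrt

# Hypothetical numbers: does a lifestyle change lower mean cholesterol from 200?
mu0 = 200        # hypothesized population mean (H0)
xbar = 195       # observed sample mean after the lifestyle change
sigma = 15       # assumed known population standard deviation
n = 36           # sample size

z = (xbar - mu0) / (sigma / sqrt(n))   # test statistic
p = NormalDist().cdf(z)                # one-sided p-value for H1: mean < 200
print(z, p)  # z = -2.0, p ≈ 0.023
```

With a p-value below 0.05, this hypothetical sample would be judged significant at the usual level.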
Series
Great Courses volume 16
Description
Delve into ANOVA, short for analysis of variance, which is used for comparing three or more group means for statistical significance. ANOVA answers three questions: Do categories have an effect? How is the effect different across categories? Is this significant? Learn to apply the F-test and Tukey's honest significant difference (HSD) test.
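The one-way ANOVA F statistic can be computed directly from its definition, F = MSB / MSW. The three groups below are invented; the p-value, which requires the F distribution, is left to a statistics package such as R's aov.

```python
# One-way ANOVA on three hypothetical groups, computing F = MSB / MSW by hand.
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)
group_means = [sum(g) / len(g) for g in groups]

ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
F = (ssb / df_between) / (ssw / df_within)
print(F)  # 3.0 for this toy data
```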
Series
Great Courses volume 4
Description
There's more than one way to be truly random! Delve deeper into probability by surveying several discrete probability distributions - those defined by discrete variables. Examples include Bernoulli, binomial, geometric, negative binomial, and Poisson distributions - each tailored to answer a specific question. Get your feet wet by analyzing several sets of data using these tools.
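Two of these distributions can be evaluated straight from their probability mass functions; the scenarios below (coin flips, event counts) are illustrative examples, not data from the course.

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """P(X = k) for a binomial(n, p) variable: k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson variable with rate lam."""
    return lam**k * exp(-lam) / factorial(k)

p_binom = binom_pmf(2, 4, 0.5)   # 2 heads in 4 fair coin flips -> 0.375
p_pois = poisson_pmf(0, 2.0)     # no events when 2 are expected -> e**-2
print(p_binom, p_pois)
```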
Series
Great Courses volume 10
Description
Move beyond point estimates to consider the confidence interval, which provides a range of possible values. See how this tool gives an accurate estimate for a large population by sampling a relatively small subset of individuals. Then learn about the choice of confidence level, which is often specified as 95%. Investigate what happens when you adjust the confidence level up or down.
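A large-sample confidence interval for a mean can be sketched in a few lines; the summary statistics below are invented, and the normal approximation stands in for the t-based interval the course would build in R.

```python
from statistics import NormalDist
from math import sqrt

# Hypothetical summary statistics; a large-sample (normal-approximation) CI.
xbar, s, n = 50.0, 10.0, 100
level = 0.95

z = NormalDist().inv_cdf(0.5 + level / 2)   # ≈ 1.96 for 95%
se = s / sqrt(n)
lo, hi = xbar - z * se, xbar + z * se
print(lo, hi)  # roughly (48.04, 51.96)
```

Raising the level to 99% widens the interval (z ≈ 2.58); lowering it narrows the interval at the cost of confidence.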
Series
Great Courses volume 5
Description
Focus on the normal distribution, which is the most celebrated type of continuous probability distribution. Characterized by a bell-shaped curve that is symmetrical around the mean, the normal distribution shows up in a wide range of phenomena. Use R to find percentiles, probabilities, and other properties connected with this ubiquitous data pattern.
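The percentile and probability lookups described here are done in R in the course (pnorm, qnorm); Python's standard library offers an analogue. The IQ-style mean-100, sd-15 distribution below is just a familiar illustration.

```python
from statistics import NormalDist

# A normal distribution with mean 100 and standard deviation 15.
scores = NormalDist(mu=100, sigma=15)

p_below_115 = scores.cdf(115)   # P(X <= 115) ≈ 0.8413 (one sd above the mean)
pct_95 = scores.inv_cdf(0.95)   # 95th percentile ≈ 124.7
print(p_below_115, pct_95)
```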
Series
Great Courses volume 22
Description
Time series analysis provides a way to model response data that is correlated with itself, from one point in time to the next, such as daily stock prices or weather history. After disentangling seasonal changes from longer-term patterns, consider methods that can model a dependency on time, collectively known as ARIMA (autoregressive integrated moving average) models.
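Full ARIMA fitting belongs to a library (R's arima, or statsmodels in Python), but the autoregressive building block can be sketched by estimating the lag-1 coefficient of a simulated AR(1) series with least squares. The series below is synthetic, with a true coefficient of 0.7.

```python
import random
from statistics import mean

# Simulate a synthetic AR(1) series x[t] = phi * x[t-1] + noise, then
# recover phi by regressing each value on its predecessor (least squares).
rng = random.Random(42)
phi_true = 0.7
x = [0.0]
for _ in range(500):
    x.append(phi_true * x[-1] + rng.gauss(0, 1))

m = mean(x)
xc = [v - m for v in x]          # work with a demeaned series
num = sum(a * b for a, b in zip(xc[:-1], xc[1:]))
den = sum(a * a for a in xc[:-1])
phi_hat = num / den
print(phi_hat)  # close to 0.7
```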
Series
Great Courses volume 12
Description
Extend the method of hypothesis testing to see whether data from two different samples could have come from the same population - for example, chickens on different feed types or an ice skater's speed in two contrasting maneuvers. Using R, learn how to choose the right tool to differentiate between independent and dependent samples. One such tool is the matched pairs t-test.
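A matched pairs t statistic is simply a one-sample test on the within-pair differences. The before/after numbers below are invented; with 3 degrees of freedom the statistic would be compared against a t critical value of about 3.18 at the 95% level.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical matched pairs: the same subjects measured before and after.
before = [200, 210, 190, 205]
after = [195, 200, 185, 195]

diffs = [b - a for b, a in zip(before, after)]   # [5, 10, 5, 10]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))       # one-sample t on differences
print(t)  # ≈ 5.196, well beyond the critical value of ~3.18 for df = 3
```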
Series
Great Courses volume 18
Description
While a creative statistical analysis can sometimes salvage a poorly designed experiment, gain an understanding of how experiments can be designed well from the outset to collect far more reliable statistical data. Consider the role of randomization, replication, blocking, and other criteria, along with the use of ANOVA to analyze the results. Work several examples in R.
Series
Great Courses volume 6
Description
When are two variables correlated? Learn how to measure covariance, which is the association between two random variables. Then use covariance to obtain a dimensionless number called the correlation coefficient. Using an R data set, plot correlation values for several variables, including the physical measurements of a sample population.
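Both quantities can be computed straight from their definitions. The paired measurements below are invented, with y constructed as exactly 2x so that the correlation coefficient should come out to 1.

```python
from math import sqrt

# Sample covariance and the correlation coefficient, from their definitions.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]   # y is exactly 2x, so r should be 1

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
sx = sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
sy = sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
r = cov / (sx * sy)        # dimensionless correlation coefficient
print(cov, r)
```

Dividing by the two standard deviations is what strips the units from covariance, leaving a number between -1 and 1.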
Series
Great Courses volume 8
Description
It's rarely possible to collect all the data from a population. Learn how to get a lot from a little by "bootstrapping," a technique that lets you improve an estimate by resampling the same data set over and over. It sounds like magic, but it works! Test tools such as the Q-Q plot and the Shapiro-Wilk test, and learn how to apply the central limit theorem.
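The resampling idea can be sketched as a percentile bootstrap for the mean; the data set and seed below are made up.

```python
import random
from statistics import mean

# Percentile bootstrap: resample the observed data (with replacement) many
# times to build an approximate sampling distribution for the mean.
rng = random.Random(0)
data = [3, 5, 7, 8, 9, 12, 13, 15, 18, 21]

boot_means = sorted(
    mean(rng.choices(data, k=len(data))) for _ in range(2000)
)
lo = boot_means[int(0.025 * 2000)]   # 2.5th percentile
hi = boot_means[int(0.975 * 2000)]   # 97.5th percentile
print(lo, hi)  # an approximate 95% CI for the mean
```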
Series
Great Courses volume 15
Description
Multiple linear regression lets you deal with data that has multiple predictors. Begin with an R data set on diabetes in Pima Indian women that has an array of potential predictors. Evaluate these predictors for significance. Then turn to data where you fit a multiple regression model by adding explanatory variables one by one.
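The lecture fits these models in R; as a rough Python analogue, least squares with NumPy on synthetic data (the true coefficients are 1, 2, and 3 by construction, so the fit is easy to check):

```python
import numpy as np

# Multiple linear regression by least squares. The response is constructed
# as y = 1 + 2*x1 + 3*x2 with no noise, purely for clarity.
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 2.0, 1.0, 3.0])
y = 1 + 2 * x1 + 3 * x2

X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # ≈ [1, 2, 3]
```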
Series
Great Courses volume 23
Description
Turn to an entirely different approach for doing statistical inference: Bayesian statistics, which assumes a known prior probability and updates the probability based on the accumulation of additional data. Unlike the frequentist approach, the Bayesian method does not depend on an infinite number of hypothetical repetitions. Explore the flexibility of Bayesian analysis.
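The updating step can be sketched with the simplest conjugate pair, a Beta prior on a coin's heads probability; the flat prior and the 7-of-10 data below are hypothetical.

```python
# Bayesian updating with a conjugate Beta prior for a coin's heads probability.
alpha, beta = 1.0, 1.0          # flat Beta(1, 1) prior: one pseudo-success, one pseudo-failure

heads, tails = 7, 3             # observed data: 7 heads in 10 flips
alpha += heads                  # posterior is Beta(prior alpha + heads,
beta += tails                   #                   prior beta + tails)

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 8 / 12 ≈ 0.667
```

Further data simply adds to the counts; no appeal to infinite hypothetical repetitions is needed.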
Series
Great Courses volume 13
Description
Step into fully modeling the relationship between variables with the most common technique for this purpose: linear regression. Using R and data on the growth of wheat under differing amounts of rainfall, test different models against criteria for determining their validity. Cover common pitfalls when fitting a linear model to data.
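The course fits such models with R's lm; the closed-form least squares solution can be sketched directly. The rainfall and yield numbers below are invented and lie exactly on a line, so the fitted coefficients are easy to verify.

```python
# Ordinary least squares for simple linear regression, from the closed form
# slope = cov(x, y) / var(x). The rainfall/yield numbers are invented.
rainfall = [10.0, 20.0, 30.0, 40.0]
yield_ = [25.0, 45.0, 65.0, 85.0]   # constructed as 2 * rainfall + 5

n = len(rainfall)
mx = sum(rainfall) / n
my = sum(yield_) / n
slope = sum((x - mx) * (y - my) for x, y in zip(rainfall, yield_)) \
        / sum((x - mx) ** 2 for x in rainfall)
intercept = my - slope * mx
print(slope, intercept)  # 2.0 and 5.0
```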
Series
Great Courses volume 21
Description
Spatial analysis is a set of statistical tools used to find additional order and patterns in spatial phenomena. Drawing on libraries for spatial analysis in R, use a type of graph called a semivariogram to plot the spatial autocorrelation of the measured sample points. Try your hand at data sets involving the geographic incidence of various medical conditions.
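An empirical semivariogram can be computed directly from its definition: gamma(h) is half the mean squared difference between measurements at points separated by distance h. The 1-D sample points below are made up, with a simple linear trend.

```python
from collections import defaultdict

# Empirical semivariogram for 1-D sample points: for each separation h,
# gamma(h) = 0.5 * mean of (z_i - z_j)^2 over all pairs at that distance.
coords = [0, 1, 2, 3, 4]
z = [0.0, 1.0, 2.0, 3.0, 4.0]    # hypothetical measurements with a trend

sq_diffs = defaultdict(list)
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        h = abs(coords[i] - coords[j])
        sq_diffs[h].append((z[i] - z[j]) ** 2)

gamma = {h: 0.5 * sum(v) / len(v) for h, v in sq_diffs.items()}
print(gamma)  # {1: 0.5, 2: 2.0, 3: 4.5, 4: 8.0}
```

Semivariance rising with distance, as it does here, signals spatial autocorrelation: nearby points resemble each other more than distant ones.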
Series
Great Courses volume 1
Description
Confront how all data has uncertainty, and why statistics is a powerful tool for reaching insights and solving problems. Begin by describing and summarizing data with the help of concepts such as the mean, median, variance, and standard deviation. Learn common statistical notation and graphing techniques, and get a preview of the programming language R, which will be used throughout the course.
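The course computes these summaries in R; a Python analogue on a small invented data set:

```python
import statistics

# Descriptive statistics on a small hypothetical data set.
data = [2, 4, 4, 4, 5, 5, 7, 9]

m = statistics.mean(data)          # 5
med = statistics.median(data)      # 4.5
var = statistics.pvariance(data)   # population variance: 4
sd = statistics.pstdev(data)       # population standard deviation: 2.0
print(m, med, var, sd)
```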
Series
Great Courses volume 9
Description
Take your understanding of descriptive techniques to the next level, as you begin your study of statistical inference, learning how to extract information from sample data. Focus on the point estimate - a single number that provides a sensible value for a given parameter. Consider how to obtain an unbiased estimator, and discover how to calculate the standard error for this estimate.
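The sample mean as a point estimate and its standard error, s / sqrt(n), can be sketched on invented data:

```python
from math import sqrt
from statistics import mean, stdev

# The sample mean is an unbiased point estimate of the population mean;
# its standard error is s / sqrt(n). Data are hypothetical.
sample = [1, 2, 3, 4, 5]

point_estimate = mean(sample)              # 3
se = stdev(sample) / sqrt(len(sample))     # sqrt(2.5) / sqrt(5) ≈ 0.707
print(point_estimate, se)
```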
Series
Great Courses volume 24
Description
Close the course by learning how to write custom functions for your R programs, streamlining operations, enhancing graphics, and putting R to work in a host of other ways. Professor Williams also supplies tips on downloading and exporting data, and making use of the rich resources for R - a truly powerful tool for understanding and interpreting data in whatever way you see fit.
Series
Great Courses volume 3
Description
Study sampling and probability. See how sampling aims for genuine randomness in the gathering of data, and probability provides the tools for calculating the likelihood of a given event based on that data. Solve a range of problems in probability, including a case of medical diagnosis that involves the application of Bayes' theorem.
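A medical-diagnosis application of Bayes' theorem can be sketched with hypothetical test characteristics (the prevalence, sensitivity, and specificity below are invented, not the course's example):

```python
# Bayes' theorem for a hypothetical medical test:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive).
prevalence = 0.01      # P(disease)
sensitivity = 0.99     # P(positive | disease)
specificity = 0.95     # P(negative | no disease)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_pos
print(p_disease_given_pos)  # ≈ 0.167: most positives are false positives
```

Even a highly accurate test yields mostly false positives when the condition is rare, which is exactly the counterintuitive result Bayes' theorem exposes.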
Series
Great Courses volume 20
Description
Polynomial regression is a form of regression analysis in which the relationship between the independent and dependent variables is modeled as an nth-degree polynomial. Step functions fit smaller, local models instead of one global model. Or, if the data are binary, there is logistic regression, in which the response variable takes categorical values such as true/false or 0/1.
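Polynomial regression is still least squares, just with powers of x as predictors. A sketch with NumPy on synthetic data (logistic regression needs an iterative fit and is best left to a library such as R's glm):

```python
import numpy as np

# Polynomial regression: fit a degree-2 polynomial by least squares to
# synthetic data generated from y = x**2 (true coefficients 1, 0, 0).
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2

coeffs = np.polyfit(x, y, deg=2)   # highest-degree coefficient first
print(coeffs)  # ≈ [1, 0, 0]
```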
Series
Great Courses volume 19
Description
Delve into decision trees, which are graphs that use a branching method to determine all possible outcomes of a decision. Trees for continuous outcomes are called regression trees, while those for categorical outcomes are called classification trees. Learn how and when to use each, producing inferences that are easily understood by non-statisticians.