SAT Data Tables
2015 percentile tables are not yet available. Please check on or around September 25, 2015.
Insight into SAT student data
SAT Program data provide educators with important information about:
- Test characteristics
Additionally, college-bound senior data provide important insights about the test-takers, including:
- Subgroup performance
- College plans and goals
- High school background
Below is a list of data tables for use in interpreting SAT scores. These tables supplement the College-Bound Seniors data.
These tables are available for download as PDFs and require Adobe Reader (latest version recommended).
A list of statistical definitions below explains the statistical terms used in the tables.
SAT Reasoning Test

- SAT Percentile Ranks (.pdf/310K): These percentile tables allow comparison among students who took the test and show how students performed on it.
- Percentile Ranks for Males, Females, and Total Group: Critical Reading (.pdf/339K)
- Percentile Ranks for Males, Females, and Total Group: Mathematics (.pdf/334K)
- Percentile Ranks for Males, Females, and Total Group: Writing (.pdf/645K)
- Composite (CR+M+W) Percentile Ranks
- Composite (CR+M) Percentile Ranks
- Critical Reading, Mathematics, and Writing Percentile Ranks by Gender and Ethnic Groups
- SAT Raw Score to Scaled Score Ranges* (.pdf/230K)

Test Characteristics (Reliability, Difficulty Levels, Completion Rates)

- Test Characteristics of the SAT: This table shows that the test is reliable and appropriately difficult, and that sufficient time is allocated for each test section.
- SAT Concordance to ACT

Other SAT Data

- Total Group Writing Subscore Report (.pdf/31K): This table shows how the total group performed on the essay and multiple-choice questions of the writing section, broken down by total group and by ethnic group.
- PSAT/NMSQT® Scores: These tables show changes in scores over time.
- Percentage of Students with Senior Year Score Gain or Loss
- SAT One-Year Mean Score Changes
- Test Scores of Nontraditional Test-Takers (seventh- and eighth-graders and adults) (.pdf/89K)
- Comparing Group Scores on the SAT (graph) (.pdf/153K): Use this information to determine when a difference between two group mean scores is statistically significant and when it is not.
- Comparing Group Scores on the SAT (table) (.pdf/89K)
- Completion Rates: Completion-rate data show how many students finished the test or section of a test. The percent of students completing the test is influenced by students who may reach the final questions but choose not to answer them because of their greater difficulty. Therefore, both the percent completing the test or section of a test and the percent completing three-fourths of the test or section are evaluated to determine if a test's time limits are appropriate. In general, a test's time limits are appropriate if virtually all of those taking it complete 75 percent of the questions and 80 percent reach the final question.
- Conversion Tables: After each new version of the SAT is equated (see Equating), a table is used to convert the raw scores on a particular version of the test to the 200-to-800 College Board scale. Each conversion table is slightly different from any other conversion table because the difficulty levels of any two versions are never exactly the same. The raw scores are not reported.
- Correlation and Correlation Coefficient: Correlation refers to the extent that two variables are related. If high scores on one variable are related to high scores on the second variable, the relationship is positive. The correlation coefficient ranges from -1.00 (perfect negative relationship) to +1.00 (perfect positive relationship). A zero correlation coefficient indicates no relationship between the two variables. Most correlations of test scores and measures of academic success are positive. The correlation coefficients reported in the tables indicate how well the various predictors (e.g., test scores, HSGPA) relate to performance in college. The higher the correlation, the better the prediction.
- Difficulty: The difficulty index is the average percent correct. After the percent of students answering each item correctly is computed, the average percent correct is computed over all of the items in the test. Some questions are hard (the percent answering correctly is low), while other questions are easy (the percent answering correctly is high). Whatever the difficulty of individual questions, the difficulty of a test is appropriate for a group if the average percent correct is around .50.
- Equating: Each year several different forms of the SAT Program tests are administered to college-bound students. Detailed content and statistical specifications are used to assemble each new form of the tests. One goal of the test assembly process is to make all forms of a particular test equivalent in difficulty for test-takers at all levels of ability. In practice, it is not possible to produce test forms that are exactly equivalent in difficulty, and a statistical procedure, referred to as score equating, is used to ensure that scores on different forms of a test are comparable. Thus, the purpose of equating is to adjust scores for minor differences in test difficulty from form to form, so that a score represents the same level of ability regardless of the difficulty of a particular form. That is, equating is the statistical procedure used to produce comparable scores on different versions of SAT Program tests.
- Mean: The mean is the arithmetic average.
- Median: The median is the point on the score scale at which 50 percent of the students' scores are above the point and 50 percent are below.
- Percentile Rank: The percentile rank is the percentage of students whose scores fall below a particular scaled score. Percentile ranks should not be compared across SAT Program tests, including different SAT Subject Tests, because the tests are taken by different groups of students.
- Raw Score: A raw score is the number of questions answered correctly minus a fraction of the number answered incorrectly. The raw scores are converted to scaled scores for reporting. One-quarter point is subtracted for each incorrect response to a five-choice question, one-third point for a four-choice question, and one-half point for a three-choice question. Nothing is subtracted for incorrect answers to student-produced response questions.
- Recentering: In 1995 the SAT verbal and math means were set at 500 (with a standard deviation of 110), restoring the distribution of scores to the center of the College Board 200-to-800 scale. Recentered scores on the SAT Subject Tests (formerly SAT II: Subject Tests) have been linked to the new SAT scale. Because many of the SAT Subject Tests are taken by more able students than the general SAT population, Subject Tests means tend to be higher than 500.
- Reliability: Reliability is the extent to which a test measures consistently. For scaled scores, a reliability coefficient of 1.00 indicates a test that is perfectly reliable. The SAT Program tests are highly consistent, with reliability coefficients of approximately .90.
- Restriction of Range: Because students choose colleges and colleges choose students, the range of high school grade point averages and admission test scores is narrower than the range found in the potential applicant pool. This restriction of range decreases the strength of the relationship (expressed as a correlation coefficient) between high school and college GPAs and between test scores and college GPA. The correlation coefficient is adjusted in these tables using the Pearson–Lawley multivariate correction to represent the correlation more accurately.
- Standard Deviation (SD): The standard deviation is a measure of the variability of a set of scores around their mean. If test scores cluster tightly around the mean, as they do when the group tested is relatively homogeneous, the SD is smaller than it would be for a more diverse group whose scores spread farther from the mean.
- Standard Error of the Difference (SED): The SED is a tool for assessing how much two test scores must differ before they indicate ability differences. To be confident that two scores indicate a true difference in ability, the scores must differ by at least 1.5 times the SED. For example, SAT verbal and math scores must differ by 60 points (40 × 1.5) in order to indicate a true difference in ability.
- Standard Error of Measurement (SEM): The SEM is an index of the extent to which students' obtained scores tend to vary from their true scores. It is expressed in score units of the test. Intervals extending one standard error above and below the true score (see below) for a test-taker will include 68 percent of that test-taker's obtained scores. Similarly, intervals extending two standard errors above and below the true score will include 95 percent of the test-taker's obtained scores.
- True Score (See Standard Error of Measurement): True score is a hypothetical concept indicating what an individual's score on a test would be if there were no error introduced by the measuring process. It is thought of as the hypothetical average of an infinite number of obtained scores for a test-taker with the effect of practice removed.
- Validity: A test is considered valid if it meets its intended purpose. Typical measures of validity are the correlation between test scores and grade point average and the correlation between test scores and course grades.
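The 75-percent/80-percent rule in the Completion Rates definition above can be sketched as a small check. The 0.97 cutoff standing in for "virtually all" is an assumption, not part of the source, and the section figures are hypothetical.

```python
def time_limits_appropriate(pct_completing_three_fourths, pct_reaching_final,
                            min_three_fourths=0.97, min_final=0.80):
    """True if virtually all test-takers (assumed here to mean >= 97%)
    complete 75% of the questions and at least 80% reach the final one."""
    return (pct_completing_three_fourths >= min_three_fourths
            and pct_reaching_final >= min_final)

print(time_limits_appropriate(0.98, 0.85))  # True: ample time
print(time_limits_appropriate(0.90, 0.70))  # False: too speeded
```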
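The Correlation definition above can be made concrete with a short Pearson computation. The scores and GPAs below are hypothetical, chosen only to show a strong positive relationship between a predictor and college performance.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical SAT scores and first-year college GPAs for five students:
scores = [450, 500, 550, 600, 650]
gpas = [2.4, 2.6, 3.0, 3.1, 3.5]
print(round(pearson_r(scores, gpas), 3))  # 0.987: a strong positive relationship
```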
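The difficulty index defined above is simply the mean proportion correct over all items. The per-item proportions below are hypothetical.

```python
def difficulty_index(item_pct_correct):
    """Average percent correct over all items in the test."""
    return sum(item_pct_correct) / len(item_pct_correct)

# Per-item proportions correct for a hypothetical five-item test;
# easy items score high, hard items score low:
pcts = [0.85, 0.70, 0.50, 0.35, 0.15]
print(round(difficulty_index(pcts), 2))  # 0.51, close to the ideal of about .50
```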
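The Percentile Rank definition above translates directly into code: the percentage of scores in the group that fall below a given scaled score. The group of scores is hypothetical.

```python
def percentile_rank(score, all_scores):
    """Percentage of scores in the group that fall below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical group of ten scaled scores:
group = [400, 430, 460, 490, 520, 550, 580, 610, 640, 700]
print(percentile_rank(550, group))  # 50.0: half the group scored below 550
```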
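The raw-score rule defined above (number right minus a fraction of the number wrong, with the fraction depending on the number of answer choices) can be sketched as follows. The right/wrong counts are hypothetical.

```python
# Deduction per wrong answer, by number of answer choices; student-produced
# response questions carry no penalty, so omit them from the wrong count.
PENALTY = {5: 0.25, 4: 1 / 3, 3: 0.5}

def raw_score(right, wrong, choices=5):
    """Number right minus a fraction of the number wrong."""
    return right - PENALTY[choices] * wrong

# 44 right and 8 wrong on five-choice questions (omissions count as zero):
print(raw_score(44, 8))  # 44 - 8 * 0.25 = 42.0
```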
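The mean, median, and standard deviation defined above can all be computed with Python's statistics module. The scaled scores below are hypothetical, chosen so the results come out round.

```python
import statistics

# Hypothetical set of five scaled scores:
scores = [440, 480, 500, 520, 560]

print(statistics.mean(scores))    # 500: the arithmetic average
print(statistics.median(scores))  # 500: half the scores above, half below
print(statistics.pstdev(scores))  # 40.0: population SD around the mean
```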
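The SED rule above (two scores indicate a real ability difference only if they differ by at least 1.5 times the SED) can be sketched as below. The formula for combining two independent SEMs into an SED is standard measurement practice rather than stated in the source, and the SEM values are illustrative, chosen to reproduce the 40-point SED in the example.

```python
import math

def sed(sem1, sem2):
    """SED for two independent scores: sqrt(SEM1^2 + SEM2^2)."""
    return math.sqrt(sem1 ** 2 + sem2 ** 2)

def truly_different(score1, score2, sem1, sem2):
    """True if the gap between the scores is at least 1.5 times the SED."""
    return abs(score1 - score2) >= 1.5 * sed(sem1, sem2)

# With SEMs of about 28 points each, SED is about 40 and the threshold about 60:
print(round(sed(28, 28)))                 # 40
print(truly_different(560, 490, 28, 28))  # True: a 70-point gap exceeds 60
print(truly_different(560, 530, 28, 28))  # False: a 30-point gap does not
```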
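The SEM definition above connects to reliability through the standard classical-test-theory formula SEM = SD × sqrt(1 − reliability); that formula and the numbers below are illustrative additions, not taken from the source.

```python
import math

def sem(sd, reliability):
    """Classical test theory estimate of the standard error of measurement."""
    return sd * math.sqrt(1 - reliability)

# With SD = 110 (the recentered scale's SD) and a reliability of about .90:
print(round(sem(110, 0.90), 1))  # 34.8 score units
```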