The data on contract rent (also referred to as "rent asked" for vacant units) were obtained from Housing Question 18 in the 2006 American Community Survey. The question was asked at occupied housing units that were rented for cash rent, vacant housing units that were for rent, and vacant units rented but not yet occupied at the time of interview.
Housing units that are renter occupied without payment of cash rent are shown separately as "No cash rent." The unit may be owned by friends or relatives who live elsewhere and who allow occupancy without charge. Rent-free houses or apartments may be provided to compensate caretakers, ministers, tenant farmers, sharecroppers, or others.
Contract rent is the monthly rent agreed to or contracted for, regardless of any furnishings, utilities, fees, meals, or services that may be included. For vacant units, it is the monthly rent asked for the rental unit at the time of interview.
If the contract rent includes rent for a business unit or for living quarters occupied by another household, only that part of the rent estimated to be for the respondent's unit was included. Excluded was any rent paid for additional units or for business premises.
If a renter pays rent to the owner of a condominium or cooperative, and the condominium fee or cooperative carrying charge also is paid by the renter to the owner, the condominium fee or carrying charge was included as rent.
If a renter receives payments from lodgers or roomers who are listed as members of the household, the rent was to be reported without deduction for any payments received from the lodgers or roomers. The respondent was to report the rent agreed to or contracted for even if it was paid by someone else, such as friends or relatives living elsewhere, a church or welfare agency, or the government through subsidies or vouchers.
Median and Quartile Contract Rent
The median divides the rent distribution into two equal parts: one-half of the cases falling below the median contract rent and one-half above the median. Quartiles divide the rent distribution into four equal parts. Median and quartile contract rent are computed on the basis of a standard distribution. (See the "Standard Distributions" section under "Derived Measures.") In computing median and quartile contract rent, units reported as "No cash rent" are excluded. Median and quartile rent calculations are rounded to the nearest whole dollar. Upper and lower quartiles can be used to note large rent differences among various geographic areas. (For more information on medians and quartiles, see "Derived Measures.")
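For illustration only, the sketch below shows how a median or quartile can be interpolated from a grouped ("standard") rent distribution. The interval boundaries and counts are hypothetical, and the simple linear interpolation shown here is only a stand-in for the Census Bureau's production tabulation procedures.

    # Sketch: estimate a quantile (median or quartile) from a grouped rent
    # distribution by linear interpolation within the interval that contains
    # the target case. Intervals and counts are hypothetical; "No cash rent"
    # units are excluded before this step.
    def grouped_quantile(intervals, counts, q):
        """intervals: list of (lower, upper) dollar bounds; counts: cases per
        interval; q: 0.5 for the median, 0.25 or 0.75 for the quartiles."""
        total = sum(counts)
        target = q * total
        cumulative = 0
        for (lower, upper), count in zip(intervals, counts):
            if count > 0 and cumulative + count >= target:
                fraction = (target - cumulative) / count
                return round(lower + fraction * (upper - lower))  # nearest whole dollar
            cumulative += count
        return None

    intervals = [(0, 250), (250, 500), (500, 750), (750, 1000), (1000, 1500)]
    counts = [120, 340, 280, 160, 100]
    print(grouped_quantile(intervals, counts, 0.50))  # median contract rent
    print(grouped_quantile(intervals, counts, 0.25))  # lower quartile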
Aggregate contract rent is calculated by adding all of the contract rents for occupied housing units in an area. Aggregate contract rent is subject to rounding, which means that all cells in a matrix are rounded to the nearest hundred dollars. (For more information, see "Aggregate" under "Derived Measures.")
Aggregate rent asked is calculated by adding all of the rents for vacant-for-rent housing units in an area. Aggregate rent asked is subject to rounding, which means that all cells in a matrix are rounded to the nearest hundred dollars. (For more information, see "Aggregate" under "Derived Measures.")
Since 1996, the American Community Survey questionnaires have provided a space for the respondent to enter a dollar amount. The words "or mobile home" were added to the question in 1999 to be more inclusive of structure types. Since 2004, contract rent has been shown for all renter-occupied housing units; in previous years (1996-2003), it was shown only for specified renter-occupied housing units.
Telephone and Personal Visit Follow-Up
The call centers and regional offices that conduct the computer-assisted telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI) nonresponse follow-up operations make every effort to hire bilingual staff. In addition, CAPI interviewers are instructed to search for interpreters within the sample household, or from the neighborhood, to assist in data collection. The regional offices maintain a list of interpreters who are skilled in many languages and are available to assist the CAPI interviewer in the respondent's preferred language. Interviewers use a flashcard to identify the specific language spoken when they cannot communicate with a particular household. CAPI interviewers can also provide respondents who speak Spanish, Chinese, Russian, Korean, or Vietnamese with translated versions of some informational materials. These materials include an introductory letter and two brochures that explain the survey, as well as a letter that thanks the respondent for his or her participation. Future plans include expanding the number of languages in which these CAPI informational materials are available and increasing the number of materials that are translated.
The ACS CATI and CAPI survey instruments currently are available in both English and Spanish. Interviewers can conduct interviews in additional languages if they have that capability. Because a translated instrument is not available in languages other than English and Spanish, interviewers translate the English version during the interview and record the results on the English instrument. The Census Bureau is exploring the possibility of creating translated instruments or guides for interviewer use in languages other than English and Spanish. Also, there are special procedures and an interviewer training module that deal with the collection of data from respondents who do not speak English. All ACS interviewers are given this training as part of their classroom interviewer training. The training is designed to improve the consistency of these procedures and to remind interviewers of the importance of collecting complete data for all households. The CATI and CAPI instruments collect important data on language-related issues, including the frequency of the use of interpreters and of the Spanish instrument, which allows the Census Bureau to monitor how data are being collected. The instruments also record how often interviewers conduct translations of their own into different languages. For example, Griffin (2006b) found that in 2005, more than 86 percent of all CAPI interviews with Spanish-speaking households were conducted by a bilingual (Spanish/English) interviewer. She also found that about 8 percent of the interviews conducted with Chinese-speaking households required the assistance of an interpreter who was not a member of the household.
Additional data collected allow the call centers and the regional offices to identify CATI and CAPI cases that were not completed due to language barriers. A profile of this information by language highlights those languages needing greater support. Griffin (2006b) found that, out of 31,489 cases in the 2005 CATI workload that were identified as requiring a language other than English, 9.3 percent could not be interviewed due to a language barrier. The greatest language needs were for Spanish, Vietnamese, Korean, and Chinese. Call center managers used this information to identify specific language recruiting needs and hire additional staff with these skills. Similar information was used to improve CAPI.
Griffin and McGovern (2004) compared the language abilities of CAPI interviewers in each regional office with the needs of the population for that area. This assessment was based on 2003 ACS language data and regional office staffing information. The regional offices used these data to assist in recruiting support in anticipation of the full sample expansion in 2005. A planned update of this assessment for both CATI and CAPI will look at current staffing.
Beginning in 2006, the population in group quarters (GQ) is included in the ACS. Many types of GQ populations have food stamp distributions that are very different from those of the household population. The inclusion of the GQ population could therefore have a noticeable impact on the food stamp distribution, particularly in areas with a substantial GQ population.
"
The 1996-1998 American Community Survey asked for a 12-month amount for the value of the food stamps following the "Yes" response category. For the 1999-2002 ACS, the words "Food Stamps" were capitalized in the question following the "Yes" response category, and the instruction "Past 12 months' value - Dollars" was added. Beginning in 2003, the words "received during the past 12 months" were added to the question following the "Yes" response category.
Carrasco, Lorena. (2003). "The American Community Survey en Espanol: Using Cognitive Interviews to Test the Functional Equivalency of Questionnaire Translations." Statistical Research Division Study Series Report. Washington, DC: U.S. Census Bureau.
Griffin, Deborah. (2006b). "Requests for Alternative Language Questionnaires." American Community Survey Discussion Paper. Washington, DC: U.S. Census Bureau.
Griffin, Deborah, and Joan Broadwater. (2005). "American Community Survey Noninterview Rates Due to Language Barriers." Paper presented at the Meetings of the Census Advisory Committee on the African-American Population, the American Indian and Alaska Native Populations, the Asian Population, the Hispanic Population, and the Native Hawaiian and Other Pacific Islander Populations, April 25-27, 2005.
Griffin, Deborah, and Pamela McGovern. (2003). "Language Action Plan for the American Community Survey." Washington, DC: U.S. Census Bureau.
McGovern, Pamela, Deborah Griffin, and Larry McGinn. (2003). "Language Action Plan for the American Community Survey." Meetings of the Census Advisory Committee on the African-American Population, the American Indian and Alaska Native Populations, the Asian Population, the Hispanic Population, and the Native Hawaiian and Other Pacific Islander Populations, May 5-7, 2003.
McGovern, Pamela D. (2004). "A Quality Assessment of Data Collected in the American Community Survey for Households With Low English Proficiency." Washington, DC: U.S. Census Bureau.
U.S. Census Bureau. (2004a). "Census Bureau Guideline: Language Translation of Data Collection Instruments and Supporting Materials." Internal U.S. Census Bureau document, Washington, DC.
U.S. Census Bureau. (2004b). "Housing and Population Edit Specifications." Internal U.S. Census Bureau documentation, Washington, DC.
Adjusting Gross Rent for Inflation
To inflate gross rent amounts from previous years, the dollar values are inflated to the latest year's dollar values by multiplying by a factor equal to the average annual Consumer Price Index (CPI-U-RS) for the current year, divided by the average annual CPI-U-RS for the earlier/earliest year.
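Written as a formula, the adjustment is:

\[
\text{rent}_{\text{current-year dollars}}
  = \text{rent}_{\text{earlier year}}
    \times \frac{\overline{\text{CPI-U-RS}}_{\text{current year}}}
                {\overline{\text{CPI-U-RS}}_{\text{earlier year}}}
\]

where each barred term is the average annual CPI-U-RS value for the indicated year.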
Data preparation and processing are critical steps in the survey process, particularly in terms of improving data quality. It is typical for developers of a large ongoing survey, such as the American Community Survey (ACS), to develop stringent procedures and rules to guide these processes and ensure that they are carried out in a consistent and accurate manner. This chapter discusses the actions taken during ACS data preparation and processing, provides the reader with an understanding of the various stages involved in readying the data for dissemination, and describes the steps taken to produce high-quality data.
The main purpose of data preparation and processing is to take the response data gathered from each survey collection mode to the point where they can be used to produce survey estimates. Data returning from the field typically arrive in various stages of completion, from a completed interview with no problems to one with most or all of the data items left blank. There can be inconsistencies within the interviews, such that one response contradicts another, or duplicate interviews may be returned from the same household but contain different answers to the same question.
Upon arrival at the U.S. Census Bureau, all data undergo data preparation, where responses from different modes are captured in electronic form creating Data Capture Files. The write-in entries from the Data Capture Files are then subject to monthly coding operations. When the monthly Data Capture Files are accumulated at year-end, a series of steps are taken to produce Edit Input Files. These are created by merging operational status information (such as whether the unit is vacant, occupied, or nonexistent) for each housing unit (HU) and group quarters (GQ) facility with the files that include the response data. These combined data then undergo a number of processing steps before they are ready to be tabulated for use in data products.
Figure 10.1
American Community Survey (ACS) Data Preparation and Processing
Figure 10.1 depicts the overall flow of data as they pass from data collection operations through data preparation and processing and into data products development. While there are no set definitions of data preparation versus data processing, all activities leading to the creation of the Edit Input Files are considered data preparation activities, while those that follow are considered data processing activities.
The ACS control file is integral to data preparation and processing because it provides a single database for all units in the sample. The control file includes detailed information documenting operational outcomes for every ACS sample case. For the mail operations, it documents the receipt and check-in date of questionnaires returned by mail. The status of data capture for these questionnaires and the results of the Failed-Edit Follow-up (FEFU) operation also are recorded in this file. Chapter 7 provides a detailed discussion of mail data collection, as well as of computer-assisted telephone interview (CATI) and computer-assisted personal interview (CAPI) operations.
For CAPI operations, the ACS control file stores information on whether a unit was determined to be occupied or vacant. Data preparation, which joins each case's control file information with the raw, unedited response data, involves three operations: creation and processing of data capture files, coding, and creation of edit input files.
Creation and Preparation of Data Capture Files
Many processing procedures are necessary to prepare the ACS data for tabulation. In this section, we examine each data preparation procedure separately. These procedures occur daily or monthly, depending on the file type (control or data capture) and the data collection mode (mail, CATI, or CAPI). The processing that produces the final input files for data products is conducted on a yearly basis.
The HU data are collected on a continual basis throughout the year by mail, CATI, and CAPI. Sampled households first are mailed the ACS questionnaire; those that do not respond by mail and for which a phone number is available receive telephone follow-up. As discussed in Chapter 7, a sample of the noncompleted CATI cases is sent to the field for in-person CAPI interviews, together with a sample of cases that could not be mailed. Each day, the status of each sample case is updated in the ACS control file based on data from data collection and capture operations. While the control file does not record response data, it does indicate when cases are completed so that additional attempts are not made to complete them in another mode.
The creation and processing of the data depend on the mode of data collection. Figure 10.2 shows the daily processing of HU response data. Data from questionnaires received by mail are processed daily and are added to a Data Capture File (DCF) on a monthly basis. Data received by mail are run through a computerized process that checks for sufficient responses and identifies large households that require follow-up. Cases failing these checks are sent to the FEFU operation. As discussed in more detail in Chapter 7, the mail version of the ACS asks for detailed information on up to five household members; if there are more than five members in the household, the FEFU process also asks questions about those additional household members. Telephone interviewers call the cases with missing or inconsistent data to obtain corrections or additional information. The FEFU data are also included in the data capture file as mail responses. The Telephone Questionnaire Assistance (TQA) operation uses the CATI instrument to collect data; these data are also treated as mail responses, as shown in Figure 10.2.
Figure 10.2
Daily Processing of Housing Unit Data
CATI follow-up is conducted at three telephone call centers. Data collected through telephone interviews are entered into a Blaise instrument. Operational data are transmitted to Census Bureau headquarters daily to update the control file with the current status of each case. For data collected via the CAPI mode, Census Bureau field representatives (FRs) enter the ACS data directly into a laptop during a personal visit to the sample address. The FR transmits completed cases from the laptop to headquarters using an encrypted Internet connection, and the control file is updated with the current status of the case. Each day, status information for GQs is transmitted to headquarters for use in updating the control file. The GQ data are collected on paper forms that are sent to the National Processing Center on a flow basis for data capture.
Median Gross Rent as a Percentage of Household Income
This measure divides the gross rent as a percentage of household income distribution into two equal parts: one-half of the cases falling below the median gross rent as a percentage of household income and one-half above the median. Median gross rent as a percentage of household income is computed on the basis of a standard distribution. (See the "Standard Distributions" section under "Derived Measures.") Median gross rent as a percentage of household income is rounded to the nearest tenth. (For more information on medians, see "Derived Measures.")
The ACS questionnaire includes a set of questions that offer the possibility of write-in responses, each of which requires coding to make it machine-readable. Part of the preparation of newly received data for entry into the DCF involves identifying these write-in responses and placing them in a series of files that serve as input to the coding operations. The DCF monthly files include HU and GQ data files, as well as a separate file for each write-in entry. The HU and GQ write-ins are stored together. Figure 10.4 diagrams the general ACS coding process.
Figure 10.4
American Community Survey Coding
During the coding phase, fields with write-in values are translated into a prescribed list of valid codes. The write-ins fall into three types of coding: backcoding, industry and occupation coding, and geocoding. Each type of ACS coding is performed either automatically (using a series of computer programs to assign codes), clerically (coded by hand), or through some combination of the two. The items that are sent to coding, along with the type and method of coding for each, are shown in Table 10.1.
Table 10.1
ACS Coding Items, Types, and Methods
Item            | Type of coding | Method of coding
Race            | Backcoding     | Automated with clerical follow-up
Hispanic origin | Backcoding     | Automated with clerical follow-up
Ancestry        | Backcoding     | Automated with clerical follow-up
Language        | Backcoding     | Automated with clerical follow-up
Industry        | Industry       | Clerical
Occupation      | Occupation     | Clerical
Place of birth  | Geocoding      | Automated with clerical follow-up
Migration       | Geocoding      | Automated with clerical follow-up
Place of work   | Geocoding      | Automated with clerical follow-up
For the 1996-1998 American Community Survey, the question, which was asked of persons 5 years old and over, instructed the respondents to mark each appropriate box if they had difficulty with any of the following three specific functions: "Difficulty seeing (even with glasses)," "Difficulty hearing (even with a hearing aid)," or "Difficulty walking." The respondents could mark as many as three boxes depending on their functional limitation status. If the respondents did not have difficulty with any of the three specific functions, the question instructed them to mark the box labeled "None of the above." The sensory and physical disability data obtained from the 1996-1998 American Community Survey are not comparable to data collected from the 1999-2006 American Community Surveys.
The first type of coding, backcoding, involves the most items. Backcoded items are those that allow respondents to write in a response other than the categories listed. Although respondents are instructed to mark one or more of the 12 given race categories on the ACS form, they also are given the option to check "Some Other Race" and to provide write-in responses. For example, respondents are instructed that if they answer "American Indian or Alaska Native," they should print the name of their enrolled or principal tribe; this allows for a more specific race response. Figure 10.5 illustrates backcoding.
All backcoded items go through an automated process for the first pass of coding. The write-in responses are keyed into digital form and then matched against a data dictionary. The data dictionary contains a list of the most common responses, with a code attached to each. The coding program attempts to match the keyed response to an entry in the dictionary in order to assign a code. For example, responses to the question on language spoken at home are automatically coded to one of 380 language categories; these categories were developed from a master code list of 55,000 language names and variations. If the respondent lists more than one non-English language, only the first language is coded.
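As a rough illustration of this first automated pass, the sketch below matches a normalized write-in string against a small data dictionary; the entries, numeric codes, and normalization rule are hypothetical and far simpler than the production code lists.

    # Sketch: automated coding of a keyed write-in response against a data
    # dictionary. Dictionary entries and codes are hypothetical examples.
    LANGUAGE_DICTIONARY = {
        "SPANISH": 1200,
        "FRENCH": 1220,
        "MANDARIN": 2000,
        "CANTONESE": 2001,
    }

    def autocode(write_in):
        """Return (code, needs_clerical_review)."""
        key = write_in.strip().upper()        # simple normalization of the keyed text
        if key in LANGUAGE_DICTIONARY:
            return LANGUAGE_DICTIONARY[key], False
        return None, True                      # no match: route to clerical coding

    print(autocode("spanish"))  # (1200, False)
    print(autocode("Spnish"))   # (None, True) -> a clerical coder assigns the code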
However, not all cases can be assigned a code using the automated coding program. Responses with misspellings, alternate spellings, or entries that do not match the data dictionary must be sent to clerical coding. Trained human coders will look at each case and assign a code. One example of a combination of autocoding and follow-up clerical coding is the ancestry item. The write-in string for ancestry is matched against a census file containing all of the responses ever given that have been associated with codes. If there is no match, an item is coded manually. The clerical coder looks at the partial code assigned by the automatic coding program and attempts to assign a full code.
To ensure that coding is accurate, 10 percent of the backcoded items are sent through a quality assurance (QA) process. Batches of 1,000 randomly selected cases are sent to two QA coders who independently assign codes. If the two QA codes do not match each other, or do not match the code assigned by the automated coding program or the clerical coder, the case is sent to adjudication. Adjudicator coders are coding supervisors with additional training and resources. The adjudicating coder decides the proper code, and the case is considered complete.
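The QA decision rule can be summarized in a simplified form as follows; the actual batching and adjudication workflow is more involved than this sketch suggests.

    # Sketch: quality assurance check for a backcoded case. If the production
    # code and the two independent QA codes all agree, the case is complete;
    # otherwise it is sent to an adjudicator. Illustrative simplification only.
    def qa_outcome(production_code, qa_code_1, qa_code_2):
        if production_code == qa_code_1 == qa_code_2:
            return production_code   # all codes agree; case is complete
        return "ADJUDICATE"          # disagreement; a supervisor assigns the final code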
Figure 10.5
Backcoding
This category includes gas piped through underground pipes from a central system to serve the neighborhood.
This category includes liquid propane gas stored in bottles or tanks that are refilled or exchanged when empty.
Electricity is generally supplied by means of above-ground or underground electric power lines.
Response Type and Number of People in the HU
Each HU is assigned a response type that describes its status as occupied, temporarily occupied, vacant, a delete, or a noninterview. Deleted HUs are units determined to be nonexistent, demolished, or commercial units, that is, out of scope for the ACS.
While this type of classification already exists in the DCF, it can be changed from "occupied" to "vacant" or even to "noninterview" under certain circumstances, depending on the final number of persons in the HU in combination with other variables. In general, if the return indicates that the HU is not occupied and no people are listed with data, the record and the number of people (which equals 0) are left as is. If the HU is listed as occupied, but the number of persons for whom data are reported is 0, it is considered vacant.
The data also are examined to determine the total number of people living in the HU, which is not always a straightforward process. For example, on a mail return, the count of people on the cover of the form sometimes may not match the number of people reported inside. Another inconsistency arises when more than five members are listed for the HU and the FEFU operation fails to obtain information for the additional members beyond the fifth; in this case, there will be a difference between the number of person records and the number of people listed for the HU. Several steps are taken to reconcile the numbers, but in general, the largest number listed is used. (For more details on the process, see Powers [2006].)
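The general rules described above can be sketched as follows; the field names are hypothetical, and the full rule set in Powers (2006) handles many additional cases.

    # Sketch: simplified response-type and person-count reconciliation.
    # Field names are hypothetical; the production rules cover many more cases.
    def reconcile(response_type, cover_count, person_records_with_data):
        if response_type == "occupied" and person_records_with_data == 0:
            response_type = "vacant"       # listed as occupied but no person data
        # In general, the largest number of people listed is used.
        household_size = max(cover_count, person_records_with_data)
        return response_type, household_size

    print(reconcile("occupied", 0, 0))  # ('vacant', 0)
    print(reconcile("occupied", 6, 5))  # ('occupied', 6) -- FEFU missed the sixth member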
Determining if a Return Is Acceptable
The acceptability index is a data quality measure used to determine if the data collected from an occupied HU or a GQ are complete enough to include a person record. Figure 10.13 illustrates the acceptability index. Six basic demographic questions plus marital status are examined for answers. One point is given for each question answered for a total of seven possible points that could be assigned to each person in the household. A person with a response to either age or date of birth scores two points because given one, the other can be derived or assigned. The total number of points is then divided by the total number of household members. For the interview to be accepted, there must be an average of 2.5 responses per person in the household. Household records that do not meet this acceptability index are classified as noninterviews and will not be included in further data processing. These cases will be accounted for in the weighting process, as outlined in Chapter 11.
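A simplified sketch of the calculation follows. The exact list of one-point items is an assumption based on the description above (five one-point items plus the two-point age/date-of-birth pair, giving the seven possible points per person).

    # Sketch: acceptability index for an occupied housing unit. Each answered
    # item scores one point; a response to either age or date of birth scores
    # two points, since given one the other can be derived or assigned. The
    # return is accepted if the average is at least 2.5 points per person.
    # The item list below is an illustrative assumption.
    ONE_POINT_ITEMS = ["relationship", "sex", "race", "hispanic_origin", "marital_status"]

    def acceptability_index(persons):
        """persons: list of dicts mapping item name -> response (None if blank)."""
        if not persons:
            return 0.0
        total_points = 0
        for person in persons:
            points = sum(1 for item in ONE_POINT_ITEMS if person.get(item) is not None)
            if person.get("age") is not None or person.get("date_of_birth") is not None:
                points += 2
            total_points += points
        return total_points / len(persons)

    def is_acceptable(persons):
        return acceptability_index(persons) >= 2.5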
Figure 10.13
Acceptability Index
Unduplicating Multiple Returns
Once the universe of acceptable interviews is determined, the HU data are reviewed to unduplicate multiple returns for a single HU. There are several reasons why more than one response can exist for an HU. A household might return two mail forms, one in response to the initial mailing and a second in response to the replacement mailing. A household might return a mailed form, but also be interviewed in CATI or CAPI before the mail form is logged in as returned. If more than one return exists for an HU, a quality index is used to select one as the final return. This index is calculated as the percentage of items with responses out of the total number of items that should have been completed. The index considers responses to both population and housing items. The mode of each return also is considered in the decision regarding which of two returns to accept, with preference generally given to mail returns. If two mail returns are received, preference generally is given to the earliest return. For the more complete set of rules, see Powers (2006).
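A rough sketch of the selection rule is shown below. The preference ordering is simplified from the description above, the field names are hypothetical, and the complete rules are documented in Powers (2006).

    # Sketch: selecting one return per housing unit from multiple responses.
    # The quality index is the percentage of applicable items with responses;
    # mode preference and tie-breaking are simplified from the full rules.
    def quality_index(items_answered, items_applicable):
        return 100.0 * items_answered / items_applicable

    def select_return(returns):
        """returns: list of dicts with keys 'mode', 'receipt_order',
        'answered', and 'applicable'."""
        def preference(r):
            mail_first = 0 if r["mode"] == "mail" else 1           # prefer mail returns
            quality = quality_index(r["answered"], r["applicable"])
            return (mail_first, -quality, r["receipt_order"])      # then quality, then earliest
        return min(returns, key=preference)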
After the resolution of multiple returns, each sample case is assigned a value for three critical variables-data collection mode, month of interview, and case status. The month in which data were collected from each sample case is determined and then used to define the universe of cases to be used in the production of survey estimates. For example, data collected in January 2007 were included in the 2007 ACS data products, even if the returns were sampled in 2006, while ACS surveys sent out in November 2007 were included in the 2007 ACS data products if they were received by mail or otherwise completed by December 31, 2007. Surveys sent out in November 2007 that were received by mail or otherwise completed after December 31, 2007, will be included in the 2008 ACS data products.
This category includes heat provided by sunlight that is collected, stored, and actively distributed to most of the rooms.
Select Files are the series of files that pertain to the cases that will be included in the Edit Input File. As noted above, these files include the case status, the interview month, and the data collection mode for all cases. The largest select file, also called the Omnibus Select File, contains every available case from 14 months of sample: the current (selected) year plus November and December of the previous year. This file includes both acceptable and unacceptable returns. Unacceptable returns include initial sample cases that were subsampled out at the CAPI stage,2 as well as returns that were too incomplete to meet the acceptability requirements. In addition, while the "current year" includes all cases sampled in that year, not all returns from the sampled year were completed in that year. This file is then reduced to include only occupied housing units and vacant units that are to be tabulated in the current year; that is, returns that were tabulated in the prior year, or that will be tabulated in the next year, are excluded. The final screening removes returns from vacant boats because they are not included in the ACS estimation universe.
Footnote:
2See Chapter 7 for a full discussion of subsampling and the ACS.
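The reduction of the Omnibus Select File can be pictured as a sequence of filters, sketched below with hypothetical field names and status values.

    # Sketch: reducing the Omnibus Select File to the cases tabulated in the
    # current year. Field names and status values are hypothetical.
    def reduce_select_file(cases, tabulation_year):
        kept = []
        for case in cases:
            if case["status"] not in ("occupied", "vacant"):
                continue                                  # drop unacceptable returns
            if case["tabulation_year"] != tabulation_year:
                continue                                  # tabulated in the prior or next year
            if case["status"] == "vacant" and case["unit_type"] == "boat":
                continue                                  # vacant boats are outside the estimation universe
            kept.append(case)
        return kept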
The next step is the creation of the Housing Edit Input File and the Person Edit Input File. The Housing Edit Input File is created by first merging the Final Accepted Select File with the DCF housing data. Date variables are then modified into the proper format. Next, variable names are given the prefix "U" to indicate that the variables are unedited. Finally, "Don't Know" and "Refuse" answers are set to blank (missing) values for the edit process.
The Person Edit Input File is created by first merging the DCF person data with the codes for Hispanic origin, race, ancestry, language, place of work, and current or most recent job activity. This file is then merged with the Final Accepted Select File to create a file with all person information for all accepted HUs. As was done for the housing items, the person variable names are prefixed with "U" to indicate that they are unedited. Next, various name flags are set to identify people with Spanish surnames and those with "non-name" first names, such as "female" or "boy." When the adjudicated number of people in an HU is greater than the number of person records, blank person records are created for the additional people; the data for these records will be filled in during the imputation process. Finally, as with the housing variables, "Don't Know" and "Refuse" answers are set to blank (missing) values for the edit process. When complete, the Edit Input Files encompass the information from the DCF housing and person files, but only for the unduplicated response records with data collected during the calendar year.
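As a rough illustration of the prefixing and blanking steps (with hypothetical variable names and response codes; the actual edit specifications are internal Census Bureau documents):

    # Sketch: preparing unedited variables for the edit process. Variable names
    # receive a "U" prefix, and "Don't Know"/"Refuse" answers become blank
    # (missing) values. Response codes here are hypothetical.
    DONT_KNOW, REFUSE = "D", "R"

    def prepare_for_edit(record):
        prepared = {}
        for name, value in record.items():
            if value in (DONT_KNOW, REFUSE):
                value = None                  # set as missing for editing and imputation
            prepared["U" + name.upper()] = value
        return prepared

    print(prepare_for_edit({"tenure": "1", "rent": "R"}))
    # {'UTENURE': '1', 'URENT': None}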
Since 1996, the American Community Survey questions have remained the same.
Average Household Size of Occupied Unit
A measure obtained by dividing the number of people living in occupied housing units by the total number of occupied housing units. This measure is rounded to the nearest hundredth.
Average Household Size of Owner-occupied Unit
A measure obtained by dividing the number of people living in owner-occupied housing units by the total number of owner-occupied housing units. This measure is rounded to the nearest hundredth.
Average Household Size of Renter-occupied Unit
A measure obtained by dividing the number of people living in renter-occupied housing units by the total number of renter-occupied housing units. This measure is rounded to the nearest hundredth.
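As a worked example with hypothetical figures, an area with 6,250 people living in 2,400 renter-occupied housing units would have an average household size of renter-occupied units of

\[
\frac{6{,}250}{2{,}400} = 2.604\ldots \approx 2.60 .
\]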
The Census Bureau does not recommend trend analysis comparing the 2003-2006 data with years prior to 2003 due to the 2003 questionnaire change. For more information regarding the 2003 questionnaire change, see "Disability Data From the American Community Survey: A Brief Examination of the Effects of a Question Redesign in 2003" (http://www.census.gov/hhes/www/disability/ACS_disability.pdf).
For the 1996-1998 American Community Survey, the data on going-outside-home limitations were derived from answers to Question 16a, which was asked of persons 16 years old and over. The question was slightly different from the 1999-2002 question and asked the respondents if they had a long-lasting physical or mental condition that made it difficult to "go outside the home alone to shop or visit a doctor's office." In the 1999-2002 American Community Survey, the going-outside-home question was part of Question 16. The 2003 questionnaire moved go-outside-home limitations to Question 17a and introduced a new skip instruction between Questions 16 and 17.
The review process involves both a review of the editing process and a reasonableness review. After editing and imputation are complete, Census Bureau subject matter analysts review the resulting data files. The files contain both unedited and edited data, together with the accompanying imputation flag variables that indicate which missing, inconsistent, or incomplete items have been filled by imputation methods. Subject matter analysts first compare the unedited and edited data to verify that the edit process worked as intended. The analysts also undertake their own analyses, looking for problems or inconsistencies in the data from their subject matter perspectives. When conducting the initial edit review, they determine whether the results make sense through a process known as a reasonableness review. If year-to-year changes do not appear to be reasonable, they institute a more comprehensive review to reexamine and resolve the issues. Allocation rates from the current year are compared with those from previous years to check for notable differences. A reasonableness review is done by topic, and results based on unweighted data are compared across years to see if there are substantial differences. The initial reasonableness review takes place with national data, and a final review compares data from smaller geographic areas, such as counties and states (Jiles, 2007).
These processes also are carried out after weighting and swapping data (discussed in Chapter 12). Analysts also examine unusual individual cases that were changed during editing to ensure accuracy and reasonableness.
The analysts also use a number of special reports for comparisons based on the edit outputs and multiple years of survey data. These reports and data are used to help isolate problems in specifications or processing. They include detailed information on imputation rates for all data items, as well as tallies representing counts of the number of times certain programmed logic checks were executed during editing. If editing problems are discovered in the data during this review process, it is often necessary to rerun the programs and repeat the review.
Creating Input Files for Data Products
Once the subject matter analysts have approved data within the edited files, and their associated recodes, the files are ready to serve as inputs to the data products processing operation. If errors attributable to editing problems are detected during the creation of data products, it may be necessary to repeat the editing and review processes.
Median Fire, Hazard, and Flood Insurance
Median fire, hazard, and flood insurance divides the fire, hazard, and flood insurance distribution into two equal parts: one-half of the cases falling below the median fire, hazard, and flood insurance and one-half above the median. Median fire, hazard, and flood insurance is computed on the basis of a standard distribution (see the "Standard Distributions" section under "Derived Measures.") Median fire, hazard, and flood insurance is rounded to the nearest whole dollar. (For more information on medians, see "Derived Measures.")
The American Community Survey questions have been the same since 1996.