Internationalization - Using non-English languages


SDA allows users to set up SDA datasets and to display results with the SDAWEB interface in practically any language. This document summarizes the issues involved.

If the language you're using is plain 'US-ASCII' (also known as simply 'ASCII'), then you don't have to worry about character encoding issues. However, if you are using a non-English language, then the browser needs to know the encoding you are using to display the characters properly.

If you are not using ASCII text, then the name of the character encoding used for a dataset must be specified using the 'CHARSET=' keyword in the general section of the DDL file. The name of this character set will then be stored as a permanent part of the SDA dataset (in the STUDYINF/studyinf file) when MAKESDA is executed.

For a list of recognized character sets, see the list of IANA Character Sets. Some commonly encountered encodings are 'ISO-8859-1' (Latin alphabet no. 1) and the similar 'Windows-1252' (found in some older Windows files); both were used in the past to encode various European languages. However, UTF-8 is today the preferred encoding for the Web because:

- it can represent every character in the Unicode repertoire, so one encoding covers all languages;
- it is backward compatible with plain US-ASCII; and
- it is by far the most widely used encoding on the Web.

Therefore, UTF-8 should always be used as the character encoding for non-English languages.

Warning: do NOT use other Unicode encodings such as UTF-16 or UTF-32. Also, do NOT use so-called "character entities" for non-ASCII characters. (These are the HTML codes that start with a '&' and end with a ';'.)

If you have documentation in another encoding, there are various tools available to convert it to UTF-8. The Linux, Unix and Mac OS X operating systems all include the "iconv" utility program which converts text from one encoding to another. For example, the following command will convert an "originalfile" encoded in ISO-8859-1 to a "newfile" encoded in UTF-8.

iconv -f ISO-8859-1 -t UTF-8 originalfile > newfile
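If iconv is not available (on Windows, for example), the same conversion can be done with a few lines of Python. This is only a sketch; the function name is ours, not part of SDA, and the source encoding must name the file's actual encoding or the decoded characters will be wrong.

```python
def convert_to_utf8(src_path, dst_path, src_encoding="iso-8859-1"):
    """Re-encode a text file to UTF-8 (same effect as the iconv command above)."""
    with open(src_path, "r", encoding=src_encoding) as src:
        text = src.read()
    with open(dst_path, "w", encoding="utf-8") as dst:
        dst.write(text)
```

For example, convert_to_utf8("originalfile", "newfile") performs the same ISO-8859-1 to UTF-8 conversion as the iconv command above.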
Various Windows editors also provide encoding conversion capabilities:

In Microsoft Word: open a file, select "Save as ...", then "Plain Text (*.txt)", then "Save". In the "File Conversion" dialog box click "Other encoding" and choose "Unicode (UTF-8)".

The popular freeware program Notepad++ also provides encoding conversion. Open a file, select the "Encoding" menu, then choose "Convert to UTF-8". (The built-in Microsoft Notepad will also save files as UTF-8, but it automatically inserts a byte order mark (BOM) at the beginning of the file -- which is not ideal.)
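The difference the BOM makes is easy to see in Python, where the 'utf-8-sig' codec adds the three-byte BOM that plain 'utf-8' omits:

```python
text = "Sélection"
plain = text.encode("utf-8")       # no BOM
signed = text.encode("utf-8-sig")  # what a BOM-inserting editor produces
print(signed[:3])                  # b'\xef\xbb\xbf' -- the UTF-8 BOM
assert signed == b"\xef\xbb\xbf" + plain
```

Tools that do not expect the BOM will treat those three extra bytes as part of the file's content, which is why a BOM-free UTF-8 file is preferable here.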

When HTML pages are generated by various SDA programs, the charset information stored with the dataset will be taken into account so the pages can be displayed correctly in a browser. Usually the charset information will be used to write a meta tag in the head element of an HTML page. For example:

<meta http-equiv="Content-Type" content="text/html;charset=utf-8">


In addition to specifying a charset in the global section of the DDL file, you should also specify the dataset's 'lang' attribute. If you specify a lang attribute in the DDL file, it will also be written to the SDA dataset's STUDYINF/studyinf file when MAKESDA is executed.

Here is an example of specifying a charset and a lang attribute in the global section of a DDL file:

title = French Canadian Study
charset = utf-8
lang = fr-CA

When SDA programs write HTML, the dataset's lang will be written as an attribute of the main HTML tag. For example:

<html lang="fr-CA">

A two-character language code like 'fr' represents the generic language. An optional subfield can be added, to indicate a regional dialect. (Even finer variations -- with longer "lang" codes -- are occasionally found.) In the example above "fr-CA" indicates that the language is French, as spoken in Canada. However, unless you have a compelling reason to distinguish between regional dialects of a given language you should always use the generic language code.

A complete list of the "lang" codes and their corresponding resource bundle file names can be found here.


The names, labels and question text for variables are all defined in the DDL file. The variable names must contain only ASCII characters, but the variable labels, category labels and question text may be in any language. However, as noted above, you must specify the character encoding with the 'CHARSET=' keyword and the language with the 'LANG=' keyword in the general section of the DDL file.

After the DDL file has been used to create the SDA dataset (by using the SDAMANAGER or by using the MAKESDA program directly), all displays of SDA results will use that language.

Note that the raw data file must be encoded in plain US-ASCII. Any other encoding will not work.
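Before running MAKESDA it is worth verifying that the raw data file really is pure US-ASCII. A minimal check in Python (the helper name is ours):

```python
def is_pure_ascii(path):
    """Return True if every byte in the file is in the 7-bit US-ASCII range."""
    with open(path, "rb") as f:
        return all(byte < 128 for byte in f.read())
```

Any file for which this returns False contains bytes outside the ASCII range and would need to be fixed before use as a raw data file.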


The user interface for SDAWEB, analysis output and codebooks can be changed to another language by replacing some or all of the default English character strings with alternate wording. (You can even modify the English wording itself if desired.)

The strings to be modified are contained in a number of different files, corresponding to the displays generated by the main SDAWEB interface for selecting options and by the various analysis and codebook programs. The following sections describe how to obtain copies of the files with the default language strings, how to modify them, and where to put the modified files.


There are three separate language files that can be modified. Once you have a copy of the language files, you can proceed to modify them.


All of the language files have the same format: each line consists of a keyword, an equal sign, and the string used by the SDA programs.

Here are a few such strings used for the output from analysis programs:

ROWVAR = Row
COLVAR = Column
WGT = Weight
FLT = Filter

Here are those same strings converted to Portuguese:

ROWVAR = Var. de linha
COLVAR = Var. de coluna
WGT = Peso
FLT = Var. de Seleção
The first three strings are simple to enter. The fourth one, however, contains characters that fall outside the simple 'US-ASCII' set: the Portuguese words in the 'FLT' string include a 'c' with a cedilla and an 'a' with a tilde over it.

Although it is possible to enter these special characters using an English keyboard and special ALT-codes, it is probably easiest in most situations to invest in a language-appropriate keyboard so these special characters can be typed directly. (These keyboards can often be purchased for $30 or less.)

Also, if you use Microsoft Word (or similar word-processing software) be sure to save the file as a ".txt" file instead of a ".doc" or ".docx" file; ".doc" and ".docx" files contain hidden formatting that will interfere with the processing of the language files.

The analysis and codebook language files should be saved as UTF-8 files. However, the language file for the user interface is a Java "resource bundle" file, which must conform to the special requirements of these files. Java resource bundles must be encoded in ISO-8859-1 (Western European). Or, if the language cannot be encoded in ISO-8859-1, then Unicode escape codes (such as '\u62b5') must be used. Fortunately, the Java JDK provides a "native2ascii" tool which can convert "any character encoding that is supported by the Java runtime environment to files encoded in ASCII, using Unicode escapes for all characters that are not part of the ASCII character set." See this Oracle documentation for more information on how to use the "native2ascii" tool. Note that although the input language file for the user interface is not UTF-8, the resulting output HTML for the user interface is UTF-8.
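If native2ascii is not handy, the same transformation can be approximated in Python. This is a sketch of the idea, not the official tool, and the function name is ours:

```python
def to_java_escapes(text):
    """Replace each non-ASCII character with a \\uXXXX escape, as
    native2ascii does for Java resource bundle files.
    (Characters outside the Basic Multilingual Plane would need
    surrogate pairs, which this sketch does not handle.)
    """
    return "".join(
        ch if ord(ch) < 128 else "\\u%04x" % ord(ch)
        for ch in text
    )
```

Applied to the Portuguese example above, 'Var. de Seleção' becomes 'Var. de Sele\u00e7\u00e3o', which is safe to place in a resource bundle file.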

After you have modified the strings, you can proceed to put the modified files into the appropriate locations.


The location of the language files depends on the program for which each file is designed.


Normally it is the browser that is responsible for selecting the correct font for displaying text, using whatever information is available in the HTML code (or response header from the server) as a guide. However, charts present a special problem because the chart inserted into the HTML output is just an image -- a picture -- and the browser has no control over selecting the font that is used in the chart's headings, labels, etc. Instead, the font is selected when the chart image is created by the SDAWEB application on the server.

By default, the SDAWEB application uses the generic Java "SansSerif" font when displaying text. (This "SansSerif" font is mapped to a particular physical font on the server on a system-dependent basis.) In many instances this default font will work fine. However, there may be cases where a specific font is required to display a given language. This font setting is made in the SDA Manager under "Custom chart font" in the "Global Specifications" section. Remember that the specified font must actually be present on the server machine running the JVM, and the server must be configured so that the font is available to Tomcat.


The SDA search utility currently works only with search terms entered in English or in a Western European language. Furthermore, the search utility is configured so that accented Latin characters (French accented vowels, German umlauts, etc.) will be displayed correctly; however, the search terms themselves can only be entered using the corresponding non-accented characters. For example, to search for all variables containing the French word 'Âge', you would enter the search term 'age'. (Recall that search terms are case-insensitive.)
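This kind of accent folding can be illustrated with Python's unicodedata module. The sketch below shows the general technique, not SDA's actual implementation:

```python
import unicodedata

def fold_accents(term):
    """Strip accents so that, e.g., 'Âge' matches the search term 'age'."""
    # Decompose each accented character into a base character plus
    # combining marks, then drop the combining marks.
    decomposed = unicodedata.normalize("NFD", term)
    base = "".join(c for c in decomposed if not unicodedata.combining(c))
    return base.lower()
```

With this, fold_accents("Âge") returns "age", matching the case-insensitive, accent-insensitive behavior described above.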

Languages that are not compatible with the Latin character set at all -- Asian ideographs, Georgian script, etc. -- cannot be used in search terms (although they will still display correctly in search results). These language limitations in SDA searching will likely be removed in a future version of SDA. However, it is important to be aware of these issues if you have datasets that are not in English.


There are a couple of other technical issues concerning character encoding that should be kept in mind.


DDL = Data Description Language

CSM, UC Berkeley
Sept. 6, 2016