Contents

     Program Matrixer
     Quick start (Tips for beginners)
 
     Matrices
     Variables
     Scalars
     Strings
     Models
 
  Interface:
             Menu of matrices (variables)
             Table editor for matrix
             Command window
 
     Commands

     Graphs
 
     Assignment commands
     Formulas
        including:
            Substitutions
            Dynamic functions
     Scalar expressions
     Matrix expressions
     Functions
 
     Econometric models estimation
        including:
           Estimation of linear regressions
           Estimation of non-linear regressions 
           Logit and probit
 
     Statistical procedures
        including:
           Descriptive statistics
           Correlation matrix
           Autocorrelation function
 
     Estimation results
        including:
           Estimates and statistics
           Variables deletion test
           Influential observations 
           Second order effects
           Histogram of standardized  residuals
           Restrictions (functions of parameters) 
 
     Macros (groups of commands)
        including:
           Menu of macros
           Files of macros (command files) 
           Macro editor
           Control commands in macros 
           Messages and signals in macros

Quick start

  First of all, you have to either import your data or type them in.
 
  Before starting with Matrixer it is worthwhile to understand one of its important features. All data accessible to the program at any given moment reside in a single directory (the working directory).
    On exit from the program, all of the matrices that have been created (except temporary ones) are automatically stored in the working directory as files with the extension .mat.
    On start of the program, all of the files previously saved in the working directory are automatically opened and are visible in the Menu of matrices.
    This feature explains why the program lacks the traditional Open and Save menu items.

  After the data are prepared you can start working with them, for example, estimate a linear regression or draw a diagram.

How to import data

    The easiest way is to import data through Windows clipboard.
 
  Suppose your data are in an Excel table (or in the table editor of any other Windows program). In this case see How to import data through clipboard from Excel table.
 
It is a little more complex to import data through the Windows clipboard from something other than a table editor (WordPad, a web page, and the like). Such data may be separated not by tab characters but by spaces or commas, and may also contain some text. If you have such data, see How to import nonstandard data through clipboard.
 
   It is also possible to import such data from a text file. See How to import text file.

          See also
       Quick start

How to import data through clipboard  from Excel table

  Select a rectangular block of numbers in the Excel table.
  Copy the data to the Windows clipboard (usually the Ctrl-INS or Ctrl-C shortcut works).
  Create a new matrix in Matrixer by pressing the INS key in the menu of matrices and providing an appropriate name.
  In the Matrixer table editor, paste the data from the clipboard by pressing Shift-INS.
  Close the table editor, saving the created matrix.
 

    Remarks:
  You can find table editors in most statistical and econometric Windows programs. The procedure just described lets you import data from any such program.
  Either a comma or a point can serve as the decimal separator. Matrixer will convert commas to points.
  If the first line of the copied block contains variable names, Matrixer will name the variables (columns of the matrix) accordingly.
  It is possible to paste a block from the clipboard into an existing matrix. If the pasted block does not contain names, cells are moved downward to create free space for the block. If the block contains names, new variables (columns of the matrix) are created with these names, and existing columns are moved to the right.

          See also
       Quick start

How to import nonstandard data through clipboard

  In your program, select the fragment containing the data.
  Copy the fragment to the Windows clipboard.
  In Matrixer select the menu item:
  Matrix > Import > from Windows clipboard
or press the Alt-M key in the menu of matrices.
  Provide an appropriate name for the matrix into which the data will be imported.
 
    Remarks:
  Carefully read the remarks on the page How to import text file.

          See also
       How to import data
       Quick start

How to import text file

  In Matrixer select the menu item
  Matrix > Import > from file
or press Shift-Alt-M in the menu of matrices.
  Select the file from which you want to import the data.
  Provide an appropriate name for the matrix into which to import the data.
 
    Remarks:
  Use the options under
  Matrix > Import > Options
to control how the data will be imported.
  The most important option is "Presume columns of fixed width". It permits importing formatted data separated by spaces. An example is
  Animal                            Y        X
  Rhesus monkey                 6.800  179.003
  Kangaroo                     34.998   56.003
   If this option is not checked (the default), data fields are assumed to be separated by a separator symbol such as a comma, TAB, space, etc. An example is
  "ARGENTINA","Machinery",480,703,599
  "ARGENTINA","Business Construction",1403,2057,"NA"
   When commas are used as separators this format is often called CSV (Comma-Separated Values) format. Variants of this format may also use ; or | as separators.
     Depending on the options, commas can be interpreted either as separators between numbers or as decimal points.
  Use the "Substitutions 'Text->Number'" option to provide numerical values for nonnumeric fields like "Male"/"Female". For missing values (like NA) use 8934567.
  Text is placed in the comments. Uncheck the corresponding checkbox to prevent this.
  To import variable names, press Ctrl-N in the table editor and paste the names from the clipboard (you should copy them to the clipboard beforehand).
  Use the table editor to edit the created matrix. Delete from the matrix everything that the program inserted into it by mistake. As a result of the import, non-numerical data are replaced by missing values, which in the table editor look like *-**-*.
  Don't expect a good result if the data are too nonstandard.
  It is often useful to edit the data before importing them.
  The program cannot import data directly from a non-text (binary) file, for example an Excel file. Use the CSV format to export data from Excel.

          See also
       How to import data
       How to import nonstandard data through clipboard
       Quick start
       Import

How to type in data

  Open an existing matrix in the table editor or create a new matrix by pressing the INS key in the menu of matrices and providing an appropriate name.
  After entering your data, close the table editor and save the matrix.
 
    Hints:
  It may be useful to switch on insert mode (the Ctrl-I key) when typing numbers inside an existing matrix.
  When typing numerical data it is convenient to use the numeric keypad at the right of the keyboard. Num Lock must be on.
  A missing value can be typed using the "*" symbol.
  Before entering data by keyboard, consider whether the data can be imported instead.
  When entering a large data set, don't forget to save it from time to time (the Ctrl-S key).

          See also
       Quick start

Missing values

    By missing values we mean gaps in the data. Matrixer is able to handle data with missing values in most procedures. Internally, and in the text format of a matrix, the number 8934567 is used to represent a missing value. A missing value is shown on the screen as "*-**-*". To write a missing value in formulas, the scalar @missing (or @na) is used.
 
  Observations with missing values are dropped from models and plots.
  In time series models (like ARMA) only adjacent observations without missing values are used.
  If a function argument does not belong to its domain (for example, sqrt(-1), ln(0), 1/0), the result is a missing value.
  The result of an operation in which one of the arguments is a missing value is also a missing value. For example:
    100/0+200.
  In the table editor, missing values can be typed using the "*" symbol.
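    For instance, @na can be combined with the if() function to insert missing values by formula (the variable names x and y here are illustrative):

   Example (replace nonpositive values by missing values):
     y := if(x>0,ln(x),@na);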
 

How to estimate a linear regression

    There are several different ways to estimate a linear regression. Choose whichever you like:
   Use the interface with buttons and menus.
   Estimate the regression quickly and efficiently from the command window.

          See also
       Quick start

How to estimate a linear regression - 1

  Select in the main menu
  Panels > Linear regression.
  Press the first "Choose" button and choose the dependent variable.
  (See How to choose a variable)
  Press the second "Choose" button and choose the regressors.
  Press the "Run" button (the button with the triangle).
 
     Remarks:
  The "Weights" input line is used when estimating a weighted regression. Leave it empty to estimate an ordinary regression.
  The corresponding command will be written to the command window. You can use it to make changes and re-estimate the regression quickly.
  Write the command to the history of commands if you want to estimate the same regression again later.

       See also
   Quick start
   How to estimate regressions quickly and efficiently
   Estimation of linear regressions

How to choose a variable

    This page explains how to choose a variable or a group of variables in "Choose variables" window.
 
    At the left side of this window there is a list of variables (vectors) in the working directory. Near each variable there is a number indicating its length. (When speaking about variables we mean both columns of matrices and one-column matrices.)
    At the right side there is a list of selected variables (vectors). To add a variable to the right list, drag it with the mouse or press the button with the arrow.
    To delete a variable, press the button with the cross.
 
  The input line at the bottom of the window is used for typing formulas. The rules for typing formulas are given in the Formulas section.
  You can add a constant term or a time trend by pressing the corresponding button.

       See also
   Quick start

How to estimate a linear regression - 2

    This section explains how to run regressions from  the command window.
 
  If the command window is not empty, you can clear it using the F10 key or by pressing the button with the cross to the left of the command window.
  Drag the dependent variable from the menu of matrices (or variables) to the command window using the mouse. You can also press Ctrl-ENTER in the matrices (or variables) menu, and the name of the selected variable will be inserted into the command window.
  In the command window type the ":" symbol after the dependent variable. This symbol separates the left-hand and right-hand sides of the linear regression. Spaces are allowed before and after this separator.
  Type a one ("1") in the command window after the ":" symbol. The one corresponds to the constant (intercept) term of the linear regression. If you do not need a constant, do not type the one (but are you really sure that you don't need a constant in your regression?).
  Add the names of the regressors (explanatory variables) to the command window. You can do this using the mouse or the Ctrl-ENTER key.
  After that you will have something like this in the command window:
      y : 1 x1 x2 x3
   Now run the command by pressing the button with the triangle to the left of the window or by pressing Shift-ENTER.
  If the program responds with an error message, reread this page and try to understand what you've done wrong.
 
    Remarks:
  You can use formulas in regression commands, for example,
      ln(y)+10 : 1 exp(x1)/2 x2+x3
  Regressors are separated by space characters, so be careful when using spaces in formulas.
  Matrixer keeps previously run commands. To recall a previous command, invoke the history of commands using the corresponding button to the left of the command window or the Alt - <- key. Often it is easier to edit an old command than to create a new one.

          See also
       Quick start
       How to estimate a linear regression - 1.

How to draw a diagram

   There are three different ways to draw a diagram in the program Matrixer.
  Use command window. (Details)
  Use menu item Show. (Details)
  Use menu item Panels. (Details)

          See also
       Quick start
       Graphs

How to draw a diagram using command window

  If the command window is not empty, you can clear it using the F10 key or by pressing the button with the cross to the left of the command window.
  Write in the command window one of the commands plot!, scatter!, xyplot!, timeplot!, depending on the type of diagram you want to draw (see Graphs).
  If the command scatter! or xyplot! is used, drag the X-axis variable from the menu of matrices (or variables) to the command window using the mouse. You can also press Ctrl-ENTER in the matrices (or variables) menu, and the name of the selected variable will be inserted into the command window.
  Add the names of the Y-axis variables to the command window. This can also be done using the mouse or the Ctrl-ENTER key.
  Variables are separated by spaces.

          See also
       Quick start
       How to draw a diagram
       Graphs

How to draw a diagram using menu item "Show"

  Choose a variable in the menu of matrices (or variables).
  Choose one of the following menu items
   Show > Plot
   Show > Scatter
   Show > XY plot
  A small panel will appear, which is used for adding other variables.
  To add another variable, drag it from the menu of matrices (or variables) to the panel using the mouse.
  Another way to add a variable is to choose it in the menu of matrices (or variables) and press the "Add" button on the panel.
  Press ENTER or the "OK" button on the panel.

          See also
       Quick start
       How to draw a diagram
       Graphs

How to draw a diagram using menu item "Panels"

  Choose the menu item
   Panels > Plot
  Choose the type of the X-axis, that is, "Observation number", "Variable" or "Time".
  If the type of the X-axis is "Variable", press the "Choose" button and select the X-axis variable.
  (See How to choose a variable)
  Press the "Choose" button at the right part of the panel and choose the Y-axis variables.
  Press the "Run" button (the button with the triangle).
 
     Remarks:
  The corresponding command will be written to the command window. You can use it to make changes and draw the diagram again quickly.
  Write the command to the history of commands if you want to draw the same diagram again later.
  The scale of the Y-axis (and the X-axis as well) can be made logarithmic. To do this, check the corresponding checkbox.
  It is possible to choose the appearance of any Y-axis variable. By default the points are connected by lines; "Stars" and "Bars" can also be chosen. To do this, check the corresponding checkboxes.

          See also
       Quick start
       How to draw a diagram
       Graphs

Program Matrixer

   The program can be used for data analysis and for econometric and statistical calculations.
   With the program you can estimate (and test hypotheses about) the following models:
   linear regression,
   non-linear regression,
   binomial logit and probit,
      and other models.
   Matrixer works with objects of the following types:
   matrices
   variables (columns of matrices)
   scalars
   strings
   models
     Matrixer is operated using
   menus and hot-keys (for example, "Panels" menu)
   commands started from  command window
   macros (blocks of commands).

Matrices

    Numerical data in the program Matrixer are stored as matrices. Every matrix has a name.
    Columns of matrices are called variables and can also have names. A matrix consisting of one column can be treated as a variable and, conversely, a variable can be treated as a matrix consisting of one column.
    A matrix may contain text along with numerical data (comments).
    The menu of matrices is used for working with matrices. This menu lists the names of the matrices available in the current working directory along with their dimensions. The table editor is used for viewing and editing the data contained in matrices.
    A matrix can be either temporary or permanent. Names of temporary matrices start with the "#" character. Temporary matrices are automatically erased after program termination. One can use matrix names starting with "#" to be able to clear the current directory of unnecessary files quickly. To erase temporary files, press Alt-E in the menu of matrices or in the command window.
    A matrix can be kept on disk (as a file) or in RAM. While the program is running, all newly created matrices are kept in RAM because this is faster. It is possible to convert a matrix from one format to the other by pressing Shift-SPACE in the menu of matrices. This helps to preserve data in case of a possible program failure. When the program is terminated, all matrices are automatically saved to disk. (See Text format of a matrix)
    There is also a special kind of matrices called model matrices.
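    For instance, assigning to a name that starts with "#" should create a temporary matrix (the names #tmp, x and y here are illustrative):

   Example:
     #tmp := x+y;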

          See also
   Variables
   Scalars

<Comments>

    Comments are text explaining the nature of the numerical data contained in the matrix. Comments are saved and edited together with the matrix and are actually part of it.

Variables

    Variables are the columns of matrices. A variable is fully specified as follows:
         <name of matrix>[<name of variable>].
    It is also possible to use a column number ([<number>]) instead of the variable name. For example, data[3] means "the 3rd column of the matrix data".
    The menu of variables is used for handling variables. Handling variables is similar to handling matrices (see, for example, the section Assignment commands).
      A matrix consisting of one column can be treated as a variable and, conversely, a variable can be treated as a matrix consisting of one column.
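    For instance, a named column can be used in a formula like any other variable (the matrix name "data" and the column name "GDP" here are illustrative):

   Example (change in GDP, using the first lag):
     gr := data[GDP]-data[GDP][-1];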

          See also
   Matrices
   Scalars

Scalars

    Scalars (along with matrices) contain the numerical data with which the program operates. Scalars are stored only while the program is running.
    Scalars can replace constants in scalar expressions, formulas and matrix expressions. Scalar names start with the symbol @. A scalar can be created as the result of an assignment command.
    There is a "Scalars" window for looking through and editing the scalars created during a session and for creating new scalars (the hot key is Ctrl-S).
    The scalar @pi denotes the number "pi"; the scalars @missing and @na denote the missing value. The scalar @timer contains the current value of the time counter in seconds.

   Example:
    @A := 16; @n := 4;
    x{1..100} := @A*sin(2*@pi*$i/@n);
    y := if(x>=0,x,@missing);

    There is also a special kind of scalars called  model scalars.

          See also
   Scalar expressions
   Matrices
   Variables
   Parameters

Text format of matrix

   A matrix in the text (human-readable) format is a file with the extension .mat. It can be viewed and edited in an ordinary text editor, so a simple way to import Matrixer data into another program is to take the data from the .mat file.
   The file has the following form:
   The first line contains the matrix dimensions, that is, the number of rows and the number of columns separated by a space.
   Comment lines may follow. Each comment line starts with //. The comments may be absent.
   The line that follows the comments contains the variable names separated by spaces. If a variable has no name, the # character is placed instead of its name. If the line is empty, the variables are treated as unnamed. However, the line must be present; otherwise the first row of numbers would be treated by the program as a line of names.
   The file is finished by the data matrix itself. Each row of the matrix is a separate line of the file. The numbers in each line are separated by spaces. The format is free. Missing data are represented by the number 8934567 (the missing value).

   An example of file contents:
-----------------------------------
2 3
// Here may be comments
x  y z
2.3  4.2 -99
1E-10 8934567 5
-----------------------------------

Menu of matrices (variables)

    The menu of matrices lists the names of the matrices in the current working directory. The number of rows and the number of columns of each matrix are shown beside the matrix name, separated by a symbol which indicates the format of the matrix.
 
    The menu of variables lists the names of the variables of the matrix currently selected in the menu of matrices (switch to the variables menu to see the list of variables).
 
    To handle matrices (variables) from the menu of matrices (variables), use the main menu or hot keys. The most important hot keys are listed below.
 
  ENTER  view and edit the current matrix (see Table editor for matrix)
  ->  switch from the menu of matrices to the menu of variables
  <-  switch from the menu of variables to the menu of matrices
  TAB  switch to the command window
  INS  insert a new matrix
  DEL  delete the current matrix (variable)
  Alt-N  rename the current matrix (variable)
  Alt-C  copy the current matrix (variable)
 
    The hot key Ctrl-ENTER inserts the name of the current matrix (variable) into the command window.
   To edit the current matrix press ENTER or double-click with the mouse.

     See also
   Description of the program Matrixer

<Working directory>

    The files with the data the program currently works with are stored in the working directory: matrix files, macro files, the history of commands, etc.
   The working directory can be selected from the main menu:
   Preferences > Directory

Models

   A model object is created after estimation of an econometric model.
   Tables and plots can be viewed by calling the menu "Estimation results" from the upper menu "View" or by running the results! command. The window "Estimates and statistics" is called from the menu "Estimation results" or by running the esttable! command.
   A model includes model matrices and model scalars. These can be saved in the window "Model data" (menu "View").

     See also
   Commands

Model matrices

   Model matrices are a special kind of matrix. They are created as a result of model estimation. The name of a model matrix begins with the \ symbol. Some typical model matrices:
   \Thetas  parameters vector,
   \Resids  residuals,
   \Fitted  fitted values.
 
     Example:
    b == \Thetas

     See also
   Model scalars

Model scalars

   Model scalars are a special kind of scalar. They are created as a result of model estimation. The name of a model scalar begins with the \@ symbols. Some typical model scalars:
   \@LL  loglikelihood,
   \@RSS  residual sum of squares.
 
     Example:
    @RSS1 := \@RSS

     See also
   Model matrices

Strings

    Matrixer has some string handling capabilities. Strings are stored only while the program is running.
    String names start with the symbols s_. A string can be created as the result of an assignment command.
    A string expression is a mixture of text and scalar expressions. The text is marked out by quotation marks ("<text>"). Individual ASCII characters can be addressed as s_<n>, where n is the code of the character. The space is used as the divider in string expressions.

   Example:
    s_a := "Factorials: ";
    s_path := "C:\Windows\Temp\";
    for! @i 1 5;
      s_f := @i "!=" exp(lngamma(@i+1));
      s_a := s_a s_f  s_32 " ";
      print! (s_path "tempfile.txt") s_f;
    endfor!;
    wait! s_a;

    There is a "Strings" window for looking through and editing the strings created during a session and for creating new strings (the hot key is Ctrl-T).

     See also
   Scalars
   Messages and signals in macros
   Other commands
   File names

File names

    File names are used in several commands (print!, list!, import!, esttable!, logfile!, external!).
    A file name can be given as a sequence of characters without spaces, as a sequence of characters in quotes, or as a string expression in parentheses.

   Examples:
    esttable! outputfile.txt;
    import! pr2 "C:\Program Files\Matrixer\Examples\prime.mat";
    s_path := "C:\Windows\Temp\";
    print! (s_path "tempfile.txt") "Test message";

     See also
   Strings
   Other commands
   Import

Command window

    The command window is used for editing and running a single command or a block of commands (a macro).

    Some commands:

    Element-by-element assignment (see Formulas):
    <matrix assignment result> := <formula>
    Matrix assignment (see Matrix expressions):
    <matrix assignment result> == <matrix expression>
    (See also Assignment commands)

       Linear regression estimation:
    <dependent variable> : <list of regressors>
      (See also How to estimate a linear regression - 2)
  
       Non-linear regression estimation:
    nls! <dependent variable> : <formula>
      (See Econometric models estimation)

        Plot:
    plot! <variables list>
        Histogram:
    hist! <variable>
      (See Graphs and How to draw a diagram using command window)
 
   Remarks:
  If the program does not recognize a command, the contents of the command window are treated as a scalar expression, and the resulting number is shown to the user. This is the calculator mode. For example, one may write 2*2 in the command window and run it as a command. The result would be 4.
  Note that an error message is often the result of an incorrect arrangement of spaces.
  Double-click a symbol or word to see a hint for it.
  Commands, functions and other special identifiers are highlighted. Use this to check spelling.
 
   Hot keys:
   Shift-Enter Run the command from the command window
   Alt - <- History (previous commands)
   F10 Clear window
   TAB Switch to menu of matrices

     See also
   Commands
   Macros (blocks of commands)
   Description of the program Matrixer

Assignment commands

  Element-by-element assignment:
    <matrix assignment result> := <formula>
or
    <matrix assignment result> <coverage of observations> := <formula> 

  Matrix expression assignment:
    <matrix assignment result> == <matrix expression>
or
    <scalar name> == <matrix expression>

  Scalar expression assignment:
    <scalar name> := <scalar expression>

  String expression assignment:
    <string name> := <string expression>
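  For instance, one of each kind of assignment (all names here are illustrative; \Thetas is a model matrix created by a previous estimation):

   Examples:
     y := ln(x)+1;
     b == \Thetas;
     @n := 4;
     s_a := "Results: ";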


     See also
   Commands
   Formulas
   Coverage of observations
   Matrix expressions
   Matrix assignment result
   Scalars
   Scalar expressions
   Strings

Functions

   Functions are used in formulas, scalar expressions and matrix expressions.
 
   There are the following types of functions in Matrixer.
 
  Matrix functions
  Ordinary (element-by-element) functions
  Scalar functions of matrices
  Dynamic functions
  Submatrix functions-2
  Functions for evaluating formula and its derivatives with respect to parameters

     See also
   Assignment commands

Ordinary (element-by-element) functions

  Ordinary (element-by-element) functions are used both in formulas and in matrix expressions.
 
The major categories of ordinary functions are
 
  Elementary functions 
  Special functions 
  Indicator and logical functions 
  Distribution functions
  Additional distribution functions
  Random number generation 
 

     See also
   Matrix functions
   Scalar functions of matrices
   Formulas
   Assignment commands
   Commands
   Macros (blocks of commands)

Matrix functions

  Matrix functions are used in matrix  expressions. The major categories of matrix functions are
 
  Matrix algebra 
  Submatrix functions 
  Various transformations 
  Creation of some special matrices 
  Other matrix functions 
 

     See also
   Matrix decompositions
   Ordinary (element-by-element) functions
   Scalar functions of matrices
   Formulas
   Assignment commands
   Commands
   Macros (blocks of commands)

Formulas

    Formulas are similar to scalar expressions, but are used for different purposes:
  Making calculations element-by-element and dynamically and assigning the result to a matrix (variable).
  Specifying variables in models.
  Specifying nonlinear functions with parameters (when estimating nonlinear models or testing hypotheses).
 
   Formulas are also similar to matrix expressions. The difference is that in ordinary formulas calculations are made only element-by-element. For example, the result of the operation max(x,y), where x and y are two vectors (not necessarily of the same length), is a vector with elements equal to max(x(i),y(i)).
 
   Examples:
        ln(xx[income])+5;
        div(x,y)*y;

  Quasi-variables in formulas can be created automatically using artificial variables (like $i or $m1,...,$m12) and random numbers (like ~n01 and ~u01, the normal and uniform distributions).
  To include a lag of a variable, use <variable>[<positive integer number with '+' or '-' sign>].
  For complex formulas and dynamic modelling, use substitutions and dynamic functions.
  To include a matrix expression in a formula, use any matrix function. In particular, the "void" matrix function m() can be useful.

   Examples:
        x-x[-1]; (usage of lags, x[-1] is the 1-st lag of  variable x)
        TRADE<50; (create a dummy variable, which assumes the value of 1 if TRADE(i) is less than 50, and 0 otherwise)
        exp(~n01); (generate lognormal random variable)
        sin(@omega*$i); (sinusoid; $i is observation number)
        m(X.b)+~n01; (include matrix expression in formula)

    Element-by-element assignment command with formula:
    <matrix assignment result> := <formula>
or
    <matrix assignment result> <coverage of observations> := <formula> 
(See Coverage of observations)
 
   Example (generate random walk):
         x{1..1000}:=$l1+~n01
 
    It is possible (see Submatrix functions-2) to refer to matrix elements using
     <matrix name>@(<row>,<column>)
construct, where the row number and the column number are specified as formulas (not, e.g., as scalar expressions). This feature can be used to manipulate matrix elements.

   Example (turn over matrix A, i.e. flip it horizontally and vertically):
          A {1..rows(A)}{1..cols(A)}
          := A@(rows(A)+1-$i,cols(A)+1-$j)

    Remarks:
  Spaces in formulas must be used carefully. In particular, if there is a space before an addition or subtraction sign, put a space after the sign too. (This restriction arises because in models lists of variables consist of formulas separated by spaces.) That is, use either
    1+2
or
    1 + 2
  If the assignment y:=x is made, where y is an existing vector of length 100 and x is a vector of length 10, the result is a vector of length 100 with the first 10 elements replaced by the corresponding elements of the vector x.

          See also
   Parameters
   Assignment commands
   Matrix expressions
   Functions
   Matrices
   Variables
   Commands

Coverage of observations

    The coverage of observations indicates which rows and/or columns are used to make calculations according to a formula. The syntax is
     {rows coverage}
or
     {rows coverage} {columns coverage}

    Both the rows coverage and the columns coverage consist of ranges separated by commas. Each range is either a single number (a scalar expression) or two numbers, a left bound and a right bound, separated by two points. The left and right bounds are scalar expressions.

   Examples:
        GB[del_GDP]{1..99} := GB[GDP][+1]-GB[GDP]
        x{1..2,4,@n+1} := 0;
        A{1..10}{1..20} := $i/$j;


          See also
   Assignment commands
   Scalar expressions
   Formulas

Substitutions

    Substitutions are used in formulas to substitute for  repeating expressions. Syntax of substitution is
         >> <substitution variable> = <formula>
Substitutions follow the main formula.

   Example:
         p := exp(z)/(1+exp(z))
         >> z = @a0+@a1*x;
 
    Substitutions are also very useful for specifying dynamic models (see dynamic functions).

   Example (generate ARCH(1) series):
         x{1..100} := arch1
         >> arch1 = sqrt(h)*~n01
         >> h = if($i>1,
           @omega+@alpha*$lag(sqr(arch1)),
           @omega/(1-@alpha));
 
    Remarks:
  Do not put semicolon before >>.

          See also
   Assignment commands
   Maximum likelihood method
   Non-linear regression

Dynamic functions

    Dynamic functions are used in formulas. Together with substitutions, dynamic functions allow one to work with dynamic models.

  Lag $lag or $lag<n>
  Differences $diff or $diff<n>
  Differences of logarithms $diffln or $diffln<n>
  Cumulative sum $csum or $csum<n>
  Self lag $l or $l<n> (used without brackets)

   Example (generate AR(1) series):
         x{1..100} := ar1
         >> ar1 = if($i>1,@phi*$lag(ar1)+~n01,0);
or
         x{1..100} := @phi*$l1+~n01;
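
   Example (approximate growth rates via differences of logarithms; the usage of $diffln with the series as its argument is an assumption, by analogy with $lag above):
         g{2..100} := $diffln(x)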
 

          See also
   Assignment commands
   Maximum likelihood method
   Non-linear regression

Artificial variables in formulas

    Artificial variables are used in formulas to denote  seasonal dummies, trends, etc.

  Observation number (row number) and its power $i, $i<n>.
  Column number and its power $j, $j<n>.
  Linear trend and its power $t, $t<n>.
  Monthly dummies $m1,...,$m12
  Quarterly dummies $q1,$q2,$q3,$q4
  Weekly dummies $w1,...,$w7
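
   Example (generate a series with a linear trend plus noise; @a and @b are assumed to be previously defined scalars):
        x{1..100} := @a+@b*$t+~n01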


          See also
   Assignment commands

Submatrix functions-2

    Submatrix functions are used in formulas, scalar expressions and matrix expressions. Rows and columns are specified as scalar expressions (except for the "element of a matrix" function in formulas, where they are specified as formulas).

  Element of a matrix (analog of el() )
   <matrix name>@(<row>,<column>)

  Row of a matrix (analog of row() )
   <matrix name>@r(<row>)

  Column of a matrix (analog of col())
   <matrix name>@c(<column>)

  Submatrix (analog of submat())
   <matrix name>@sub(<top row>,<bottom row>, <left column>,<right column>)

  Diagonal of a matrix (analog of diag())
   <matrix name>@d()

  Vectorized matrix (analog of vec())
   <matrix name>@v()

    Examples:
      A@(3,1)
      A@c(3)
      A@d()


          See also
   Matrix functions
   Submatrix functions 
   Functions
   Matrices
   Matrix assignment result

Functions for evaluating formula and its derivatives with respect to parameters

  fu(<formula>,<vector of parameters values>) evaluates the formula as a function of its parameters
  deriv(<formula>,<vector of parameters values>) evaluates the formula's derivatives
  dderiv(<formula>,<vector of parameters values>,<direction vector>) evaluates the formula's directional derivative
  dderiv2(<formula>,<vector of parameters values>,<direction vector>) evaluates the formula's second directional derivatives

   Examples:
        FuVal == fu(%a+%b*X,-1|7)
        DD2 == dderiv2(exp(%a+%b*X^%c),0|1|2,1|2|3)

          See also
   Formulas
   Parameters
   Functions

Matrix expressions

    Matrix expressions have basically the same syntax as scalar expressions. In many situations matrix expressions are also similar to formulas and can be used, to some extent, for element-by-element operations. But their main role is to perform various matrix operations such as matrix inversion and transposition (see Matrix functions).
 
  Unary operations:
      ' transpose
      ~ inverse
 
   Binary operations:
      + sum
      - difference
      . matrix product
      * direct (element-by-element) product
      / direct division
      ' matrix product with transposition of the first matrix
      ~ matrix product with inversion of the first matrix
      & or space  horizontal concatenation
      | vertical concatenation
      : estimated coefficients for linear regression
      ? sorting matrix using vector
 
  Functions
      inv  matrix inverse
      diag  diagonal matrix
      inner  inner product
      sort  sorting
      onesvec  vector of ones
      acov  autocovariance function
      (See Matrix functions, and also Scalar functions of matrices)
 
    The format of the matrix assignment command is
     <matrix assignment result> == <matrix expression>
(see Matrix assignment result). In the simplest case the result is just the name of the resultant matrix:
     <name of resultant matrix> == <expression>
    A matrix expression can also be assigned to a scalar:
     <scalar name> == <matrix expression>
 
    Examples:
      b == (x'x)~x'y;
      d == A.c+d|e-v[1]&w[xx];
      x == x1 x2 x3;
      @rss == inner(e);
 
    To include a lag of a variable, use <variable>[<positive integer number with '+' or '-' sign>]

    Remarks:
  In matrix expressions spaces are used as the matrix concatenation operator, so use them carefully elsewhere. In particular, if there is a space before an addition or subtraction sign, put a space after the sign too. That is, use either
    1+2
or
    1 + 2
  When binary arithmetic operations or element-by-element functions are used, the requirements on matrix dimensions are stricter than for formulas.

           See also
   Matrix decompositions
   Formulas
   Assignment commands
   Functions
   Matrices
   Variables

Matrix assignment result

    Matrix assignment result is used in assignment  commands. Rows and columns are specified as scalar  expressions.

  Entire matrix
   <matrix name>

  Variable
   <matrix name>@(<row>,<column>)
or
   el(<matrix name>,<row>,<column>)

  Row of a matrix
   <matrix name>@r(<row>)
or
   row(<matrix name>,<row>)

  Column of a matrix
   <matrix name>@c(<column>)
or
   col(<matrix name>,<column>)

  Submatrix
   <matrix name>@sub(<top row>,<bottom row>, <left column>,<right column>)
or
   submat(<matrix name>,<top row>,<bottom row>, <left column>,<right column>)

  Diagonal of a matrix
   <matrix name>@d()
or
   diag(<matrix name>)

  Vectorized matrix
   <matrix name>@v()
or
   vec(<matrix name>)

    Examples:
      A == onesmat(2,2);
      vec(A) == trend(4);
      A@(3,1) := 11;
      el(A,rows(A),2) := 17;
      A@c(3) := 1/$i;
      col(A,4) == trend(3);
      row(A,4) := -$j;
      A@d() := diagonal(A)+0.01;
      A[5] == A[1]-A[4];


           See also
   Assignment commands
   Matrix functions
   Submatrix functions 
   Submatrix functions-2
   Matrices
   Variables

Scalar expressions

  Arithmetic operations:
     + summation
     - subtraction
     * multiplication
     / division
     ^ raising to power

  Relational operators:
    = equal
    < less than
    > greater than
    <= less than or equal
    >= greater than or equal
    <> not equal
  The result of relational operation is: 1 (true), 0 (false)

  Some functions:
     ln natural logarithm
     exp exponential function
     sqrt square root
     sqr square
     abs absolute value
(see Functions).

   Examples:
    @omega := 2*@pi*~u01;
    @y := if(@omega<@pi,cos(@omega),1/cos(@omega));
    @sum := 0;
    @n := rows(x);
    for! @i 1 @n-1;
      @sum := @sum+exp(x@(@i,1));
    endfor!;


           See also
   Matrix expressions
   Formulas
   Scalars

Table editor for matrix

   The table editor is used for typing data into newly created matrices and for editing existing matrices.
The table editor is similar to the table editors of other programs, e.g., Excel. But unlike Excel, the table editor of Matrixer is designed specifically for editing numerical data. That is, only numbers are placed in the cells of the table (one number in each cell).
 
   Remarks:
     The Windows clipboard can be used for copying data from the table editor of some other program into the Matrixer table editor, and from the Matrixer table editor into another program.
     Please read the hints about entering data. See  How to type in data.
     It is also possible to view and edit matrix comments in the table editor. To do this, switch to the appropriate window using the tabs below the table or the F3 hot key.
 
   Hot keys:
 
    ESC  Close
    Ctrl  <- -> Move quickly horizontally
     ENTER, F2 or typing a character  Edit cell
 
       While editing cell:
         "*"  Enter missing value
         ESC  Quit without saving
         ENTER  Save changes and quit
 
     DEL  Delete cell
     INS  Insert cell
     Ctrl-Y  Delete row
     Alt-DEL  Delete variable (column)
     Alt-INS  Add variable (column)
     Alt-N  Rename variable
     Ctrl-N  Rename all variables
     Ctrl-I  Switch insert mode
     F3 Switch to comments window

       See also
   How to type in data
   How to import data through clipboard  from Excel table.
   Matrices
   Variables

<Insert mode>

Insert mode allows you to input data continuously in the interior of a matrix in the table editor. After you finish editing one cell, another cell is inserted, and so on. To stop such continuous data input, press ESC.
    Insert mode is toggled by the Ctrl-I hot key.

Econometric models estimation

       Linear regression
       Non-linear regression
       Binomial logit and probit
       Regression with count dependent variable
       Regression with ordered dependent variable
       Tobit (censored regression)
       Truncated regression
       Regression with multiplicative  heteroskedasticity
       Regression with ARMA error
       Box-Jenkins model (ARIMA)
       GARCH regression
       (Generalized) instrumental variables method 
       Nonlinear instrumental variables method
       Maximum likelihood method
       Nonparametric estimation
       Quantile regression
       Simultaneous equations
       Vector autoregression
       ARFIMA-FIGARCH
       Forecasting
       Nonlinear function minimization

     See also
   Estimation results
   Statistical procedures

Statistical procedures

       Descriptive statistics
       Correlation matrix
       Autocorrelation function
       Histogram
       Spectral density
       Spectrogram
       Dickey-Fuller test (ADF)
       Normal PP-diagram
 
      Other statistical and mathematical commands
      Matrix decompositions
 
   Some statistical procedures are implemented as functions. See, for example,
  Matrix functions: matrix algebra 
  Distribution functions
  Scalar functions of matrices

     See also
   Econometric models estimation
   Commands
   Estimation results

Estimation of linear regressions

    The easiest and fastest way to estimate a linear regression in Matrixer is to run the corresponding command from the command window.
    The command has the following form
    <dependent variable> : <list of regressors>
 
    An intercept term in the list of regressors can be indicated as '1'.
    A regressor can be specified as a formula (see Formulas).
    A weighted regression can be estimated by adding &/ <weights>. It is assumed that the weights are proportional to the square root of the variance.
 
    Examples:
     y : 1 x1 x2 x2[-1];
     ln(data[cons]) : 1 ln(data[gnp]*10) ln(data[gnp])^2;
     A[y] : 1 A[x] &/ A[w];
 
    After the regression is estimated, the user is taken to a menu that allows viewing and analyzing the results. Afterwards this menu can be recalled using the Alt-R hot key.
 
    OLS coefficients can also be calculated by means of the : operation in a matrix expression.
    Example:
        b==Y:X
Here Y is the dependent variable, X is a matrix of regressors.
    The same result can be obtained with the following matrix expression
         b==(X'X)~X'Y

     See also
   Non-linear regressions estimation
   Econometric models estimation
   Estimation results
   How to estimate a linear regression - 1.
   How to estimate a linear regression - 2.

Non-linear regressions estimation

    A non-linear regression can be estimated by running, from the command window, a command of the following form
      nls! <dependent variable> : <formula>
    The names of estimated parameters begin with % character.
 
    Options:
   &method <gnr|newton|gnrn|simplex|bfgsa|bfgsn|sa>
    numerical algorithm
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     The default numerical algorithm is Gauss-Newton. Several other algorithms are also available, such as the Newton method, the simplex method, etc.
 
    Examples:
     nls! D[Y] : %cnst + %a*D[Y] - %cnst*%a*z
     nls! usa[labor] : %c1 + exp(-%c2 + usa[unempl])

       See also
   Linear regression estimation
   Econometric models estimation
   Estimation results

Parameters

    Parameters are used in non-linear models (mle!, nls!, nliv!, min!) to denote the scalar variables to be estimated. A parameter name begins with the % symbol.

    Other uses:
  Functions for evaluating formula and its derivatives with respect to parameters
  Restrictions (functions of parameters)

       See also
   Formulas
   Econometric models estimation

<Gauss-Newton method>

    The Gauss-Newton method is used for obtaining least squares estimates in a non-linear regression. Its basic principle is linearization of the regression function.

Logit and probit

    Matrixer is able to estimate binomial logit or  probit.
    Command for estimating logit has the following form
    logit! <dependent variable> : <list of regressors>

    Example:
    logit! y: const x1 x2

    Command for probit has the following form
    probit! <dependent variable> : <list of regressors>
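
    Example (by analogy with the logit example above):
    probit! y: const x1 x2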
 
The dependent variable in these models must consist of zeros and ones.

     See also
   Linear regression estimation
   Econometric models estimation
   Estimation results

<Logit and probit>

  Logit and probit are kinds of regression model  with discrete (qualitative) dependent variable.  Binomial probit and logit are models in which dependent variable assumes two values (0 and 1).
    Logit corresponds to logistic distribution.
    Probit corresponds to normal distribution.
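
    In both models the probability of outcome 1 has the form
    P(y(i)=1) = F(X(i)b),
where F is the logistic distribution function for logit and the standard normal distribution function for probit.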

Maximum likelihood method

    Command for maximum likelihood estimation has the following form
    mle! <formula>
    The formula must contain the contribution of one typical observation to the loglikelihood function. The names of estimated parameters start with the % character.

    Options:
   &method <newton|bfgsa|bfgsn|simplex|bhhha|bhhhn|sa>
    numerical algorithm
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     The default numerical algorithm is Newton. Several other algorithms are also available, such as BHHH (OPG), the simplex method, etc.
   Example:
    mle! -1/2 * (ln(2*@pi) + %ls2)
       - sqr(y - %a - %b*x) / 2 / exp(%ls2)

    This example shows how a simple regression y(i) = a + b*x(i) can be estimated by maximum likelihood. The parameter %ls2 here corresponds to the logarithm of the error variance; @pi is the number pi.

   See also
   Econometric models estimation

<BHHH (OPG) method>

    The method BHHH (OPG) is used for obtaining maximum likelihood estimates. Its main principle is  linearization of the loglikelihood function.  It uses the matrix of contributions of observations to  the gradient of loglikelihood function  and requires only first derivatives.
    Usually BHHH is slow and gives an inaccurate estimate of the variance-covariance matrix of the parameters.

<Newton method>

   Newton's method is a general method for unconstrained nonlinear optimization. It uses the gradient (vector of first derivatives) and the Hessian (matrix of second derivatives) of the objective function.
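
   Schematically, each iteration updates the parameter vector as
   x(k+1) = x(k) - H(x(k))^(-1) g(x(k)),
where g is the gradient and H is the Hessian of the objective function.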

<Simulated annealing>

   Simulated annealing is a general method for unconstrained nonlinear optimization. It does not require derivatives. SA is a brute-force extensive random search method and is recommended for nonsmooth functions and functions with multiple local optima. It is generally rather slow.
   Matrixer uses a version of SA algorithm due to Goffe et al. (Goffe, Ferrier and Rogers, "Global Optimization of Statistical Functions with Simulated Annealing," Journal of Econometrics, vol. 60, no. 1/2, Jan./Feb. 1994, pp. 65-100). The algorithm was modified by allowing temperature to grow after successful iterations. This makes SA much more robust and self-adjustable.

Regression with multiplicative  heteroskedasticity

   The command for estimating a regression with multiplicative heteroskedasticity has the following form
    mhetero! <dependent variable> : <list of regressors> : <heteroskedasticity regressors list>
     Example:
    mhetero! y : 1 x1 x2 x3 : 1 z1 z2

   See also
   Linear regression estimation
   Econometric models estimation

<Regression with multiplicative  heteroskedasticity>

   In a regression with (linear) multiplicative heteroskedasticity  the error variance is equal to exp(Z(i)a).
   Here Z is a matrix made of the variables that influence the error variance (generally this matrix should contain a column of ones), and a is a vector of heteroskedasticity parameters.
   Otherwise a regression with multiplicative heteroskedasticity is the same as a linear regression.

Regression with count dependent variable

   The commands for estimating regressions with a count dependent variable have the following forms
  Poisson regression
    poisson! <dependent variable> : <list of regressors>
  negative binomial regression (NegBin-2)
    negbin! <dependent variable> : <list of regressors>
     Example:
    poisson! count : 1 x1 x2 ln(x3)

   See also
   Linear regression estimation
   Econometric models estimation

<Regression with count dependent variable>

Regression with a count dependent variable is a variant of the regression model for count data. The dependent variable in such a regression is discrete (a non-negative integer). The simplest regression of this kind is Poisson regression, based on the Poisson distribution. To take into account overdispersion (variance greater than the mean), negative binomial regression, based on the negative binomial distribution, is used; it is an extension of Poisson regression. In the negative binomial model, heterogeneity is introduced into the Poisson distribution with the help of a Gamma-distributed multiplier.
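
In Poisson regression, for example,
    P(y(i)=k) = exp(-lambda(i)) lambda(i)^k / k!,   lambda(i) = exp(X(i)b),
so the conditional mean and the conditional variance are both equal to lambda(i).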

Regression with ordered dependent variable

   The command for estimating probit regression with ordered dependent variable  has the following form
    ordered! <dependent variable> : <list of regressors>
 
    Options:
   &method <newton|bfgsa|bfgsn|simplex|bhhha|bhhhn|sa>
    numerical algorithm (newton is default)
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     Example:
    ordered! label : 1 income sex

   See also
   Linear regression estimation
   Econometric models estimation

<Regression with ordered dependent variable>

   Regression with ordered dependent variable is a variant of the regression model with a qualitative dependent variable. Only the ordering of the dependent variable matters, not its value.

Tobit (censored regression)

   The command for estimating tobit  regression model has the following form
    tobit! <dependent variable> : <list of regressors>
   By default zero is used as the left limit.
 
    Options:
   &llimit <number>
    left limit
   &rlimit <number>
    right limit
 
     Example:
    tobit! y_cens : 1 x1 x2
    tobit! y_2cens : 1 x &llimit 1 &rlimit 100
 
    Parameter %h in "Estimates and statistics" table equals 1/se where se is regression standard error.

   See also
   Linear regression estimation
   Econometric models estimation
   Truncated regression

<Tobit (censored regression)>

   Tobit (censored regression) is a regression model in which the dependent variable is censored, that is, the dependent variable is not observed when it is less than (or greater than) some limit. The regressors for the limited observations are observed (unlike in the truncated regression model).
   A typical example is a model with left censoring at zero. Initial model is
   y*(i) = X(i)b + e(i)
Variable y(i) is observed instead of y*(i), where
   y(i)=0 if y*(i)<0
and
   y(i)=y*(i) if y*(i)>=0.

Truncated regression

   The command for estimating truncated  regression model has the following form
    truncreg! <dependent variable> : <list of regressors>
   By default zero is used as the left limit.
 
    Options:
   &llimit <number>
    left limit
   &rlimit <number>
    right limit
   &method <newton|bfgsa|bfgsn|simplex|bhhhn|sa>
    numerical algorithm (newton is default)
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     Example:
    truncreg! y_trunc : 1 x1 x2
    truncreg! y_2trunc : 1 x &llimit 1 &rlimit 100
 
    Parameter %h in "Estimates and statistics" table equals 1/se where se is regression standard error.

   See also
   Linear regression estimation
   Econometric models estimation
   Tobit (censored regression)

<Truncated regression>

   Truncated regression is a regression model in which an observation is missing if the dependent variable is less than (or greater than) some limit. Neither the dependent variable nor the regressors are observed for the limited observations (unlike in the tobit model).

Regression with ARMA error

   The command for estimating regression with ARMA error has the following form
    arma! (<p>,<q>) <dependent variable> : <list of regressors>
   Here (p,q) is the order of the ARMA process for error term.
 
    Options:
   &estimator <css|mle>
    estimator: conditional sum of squares or (default) exact maximum likelihood
   &method <gnr|gnrn|bfgsn|simplex|sa>
    numerical algorithm (gnr|gnrn for css only)
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
   &irfhorizon <positive integer number>
     number of values of the impulse response function
 
     Example:
    arma! (1,1) y : 1 x

   See also
   Linear regression estimation
   Box-Jenkins model (ARIMA)
   Econometric models estimation

<Regression with ARMA error>

   Regression with ARMA(p,q) error is given by the following equations
 
   y(t)=X(t)b+u(t),
   u(t)=phi_1*u(t-1)+...+phi_p*u(t-p)+eps(t) +theta_1*eps(t-1)+...+theta_q*eps(t-q).
 
   Here p is the order of the autoregression, q is the order of the moving average.

Box-Jenkins model (ARIMA)

   The command for estimating ARIMA model  has the following form
    boxjen! (<p>,<q>) <variable>
   Here (p,q) is the order of ARMA process.
 
    Options:
   &d <integer number>
    order of integration
   &estimator <css|mle>
    estimator: conditional sum of squares (default) or exact maximum likelihood
   &method <gnr|gnrn|bfgsn|simplex|sa>
    numerical algorithm (gnr|gnrn for css only)
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
   &fhorizon <positive integer number>
    forecast horizon
   &irfhorizon <positive integer number>
     number of values of the impulse response function
 
     Example:
    boxjen! (1,1) yield

   See also
   Linear regression estimation
   Regression with ARMA error
   Econometric models estimation

<Box-Jenkins model (ARIMA)>

   ARIMA(p,d,q) process is given by the following equations
 
   y0(t)=mu+phi_1*y0(t-1)+...+phi_p*y0(t-p)+eps(t) +theta_1*eps(t-1)+...+theta_q*eps(t-q),
  y0(t)=D(d)y(t),
 
   Here D(d) is d-th difference operator, p is the order of autoregression, q is the order of moving average, d is the order  of integration.

GARCH regression

   The command for estimating GARCH regression  has the following form
    garch! (<p>,<q>) <dependent variable> : <list of regressors>
or
    garch! (<p>,<q>) <dependent variable> : <list of regressors>  & <list of heteroskedasticity regressors>
   Here (p,q) is the order of GARCH process. In most cases it is enough to take p=1 and q=1.
 
    Options:
   &distr <normal|tstud>
    distribution of disturbances
   &garchm <none|log>
    variance in mean
   &hetff <var|logvar>
    variance functional form
   &stabpar <number>
    stabilization parameter
   &method <scoring|bfgsa|bfgsn|simplex|bhhha|bhhhn|sa>
    numerical algorithm
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     Example:
    garch! (1,1) price : 1 price[-1] price[-2]

   See also
   Linear regression estimation
   Econometric models estimation

<GARCH regression>

   GARCH regression is a kind of regression in which the error term constitutes a random process of GARCH type (generalized autoregressive conditional heteroskedasticity). In such a regression the variance of the error for the i-th observation depends on the squared errors of previous observations (i-1, i-2, etc.).
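
   For example, in a GARCH(1,1) regression the conditional variance of the error evolves as
   h(i) = omega + alpha*sqr(e(i-1)) + beta*h(i-1),
where e(i-1) is the previous error and h(i) is the error variance for the i-th observation.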

(Generalized) instrumental variables estimator

    The command for estimating regression by the instrumental variables method has the following form
    iv! <dependent variable> : <list of regressors> : <list of instruments>
     Example:
    iv! p : 1 q : 1 q[-1] p[-1] z

   See also
   Linear regression estimation
   Simultaneous equations
   Nonlinear instrumental variables method
   Econometric models estimation

Nonlinear instrumental variables method

   The command for estimating regression by the nonlinear  instrumental variables method has the following form
     nliv! <left-hand side formula> : <right-hand side formula> : <list of instruments>
 
     Example (nonlinear consumption function):
    nliv! C : %a+%b*Y^%c : 1 C[-1] Y[-1] C[-2] Y[-2]
   The lags of consumption C and income Y serve as instruments  here.
 
     Example (Box-Cox regression):
    nliv! boxcox(Wage/exp(mean(ln(Wage))),%lambda)
   : %c+%c_ed*Ed+%c_ex*Ex+%c_exsq*Ex^2+%c_race*Race+%c_sex*Fe
   : 1 Ed Ex Ex^2 Race Fe Ed^2 Ex^4 Ed*Ex Ed*Ex^2
   All regressors and some of their cross-products serve as instruments  here.

   See also
   Non-linear regressions estimation
   (Generalized) instrumental variables estimator
   Econometric models estimation

Nonparametric estimation

       Kernel regression
       Polynomial regression
       Cubic spline
       Kernel density estimation
       SNP density estimation (Hermite series)

   See also
   Econometric models estimation

Kernel regression

   The command for estimating kernel regression has the following form
    kernelreg! <dependent variable> : <explanatory variable>
 
    Options:
   &kernel <epanechnikov|gaussian|rectangular|
     triangular|quartic>
    type of kernel (default is epanechnikov)
   &smoothing <number>
    smoothing parameter, bandwidth
 
     Examples:
    kernelreg! y : x
 
    kernelreg! DATA[speed] : DATA[distance]
    &kernel gaussian &smoothing 1E-2

   See also
   Econometric models estimation
   Nonparametric estimation

Polynomial regression

   The command for estimating polynomial regression has the following form
    polynom! <dependent variable> : <explanatory variable>
 
    Options:
   &smoothing <number>
    smoothing parameter, degree of polynomial plus one
 
     Examples:
    polynom! y : x
 
    polynom! DATA[speed] : DATA[distance]
    &smoothing 6

   See also
   Econometric models estimation
   Nonparametric estimation

Cubic spline

   The command for estimating regression using cubic  spline has the following form
    spline! <dependent variable> : <explanatory variable>
 
    Options:
   &smoothing <number>
    smoothing parameter
   &smoothing0 <number>
    starting smoothing parameter
 
     Examples:
    spline! y : x
 
    spline! DATA[speed] : DATA[distance]
    &smoothing0 10

   See also
   Econometric models estimation
   Nonparametric estimation

Kernel density estimation

   The command for kernel density estimation has the following form
    kernel! <variable>
 
    Options:
   &kernel <epanechnikov|gaussian|rectangular|
     triangular|quartic>
    type of kernel (default is epanechnikov)
   &smoothing <number>
    smoothing parameter, bandwidth
 
     Examples:
    kernel! x
 
    kernel! ln(SPAIN[Income])
    &kernel quartic &smoothing 1E-2

   See also
   Econometric models estimation
   Nonparametric estimation

SNP density estimation (Hermite series)

   The command for seminonparametric density estimation has the following form
    hermite! <variable>
 
    Options:
   &smoothing <number>
    smoothing parameter, degree of polynomial plus one
 
     Examples:
    hermite! x
 
    hermite! ln(BONDS[Yield]) &smoothing 4

   See also
   Econometric models estimation
   Nonparametric estimation

Quantile regression

   The command for estimating quantile regression has the following form
    qreg! (<p>) <dependent variable> : <list of regressors>
 
    Options:
   &prob <p>
     p from the (0,1) interval is the probability defining the quantile
 
     Example:
    qreg! demand : 1 income &prob 0.75
 
    The regression for the 0.5 quantile (which corresponds to the median) can be estimated by dropping the &prob option.
 

   See also
   Linear regression estimation
   Econometric models estimation

Simultaneous equations

    The command for estimating simultaneous equations has the following form
  fiml! <list of endogenous variables> : <list of exogenous variables>
 
    This command is used for estimating simultaneous equations by the full information maximum likelihood method (FIML). Two-stage least squares and three-stage least squares can also be used; to do this, replace fiml! in the command given above by 2sls! or 3sls! respectively.
    After running the command, a screen is shown where the user can mark the variables that appear in each equation. The program automatically checks whether the system is identified and shows a warning if it is not.

    Options:
   &ypatt <pattern matrix>
    inclusion/exclusion of endogenous variables
   &xpatt <pattern matrix>
    inclusion/exclusion of exogenous variables

   In pattern matrix 0 means excluded variable
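
     Example (a hypothetical two-equation demand-supply system; p and q are endogenous, income and w are exogenous, and the variable names are illustrative only):
    fiml! p q : 1 income w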

   See also
   Linear regression estimation
   Econometric models estimation
   (Generalized) Instrumental variables method

<Quantile regression>

   Quantile regression is said to be a robust estimation technique. Unlike ordinary regression, quantile regression estimates one of the quantiles of the dependent variable, not its mean.
   The 0.5 quantile corresponds to the median. Median regression coincides with the method of least absolute deviations: the estimates are obtained by minimizing the sum of absolute deviations (not the sum of squared deviations as in the method of least squares).
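
   Formally, the estimates for the p-th quantile minimize the sum over observations of rho_p(y(i) - X(i)b), where rho_p(u) = u*(p-1) for u < 0 and rho_p(u) = u*p for u >= 0; for p = 0.5 this reduces to the sum of absolute deviations (up to a factor of 1/2).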

Vector autoregression

    The command for estimating vector autoregression has the following form 
  var! (<order>) <list of endogenous variables> : <list of exogenous variables>
 
   After running the command, a screen is shown where the user can mark the variables that appear in each equation.

    Options:
   &ypatt <pattern matrix>
    inclusion/exclusion of endogenous variables
   &xpatt <pattern matrix>
    inclusion/exclusion of exogenous variables
   &covpatt <pattern matrix>
    restrictions on error covariance matrix
   &irfhorizon <positive integer number>
     number of values of the impulse response function
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations

   In pattern matrix 0 means excluded variable (zero coefficient)
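
     Example (a hypothetical bivariate VAR of order 2; the variable names are illustrative, and including the constant as '1' is assumed to work as in regression commands):
    var! (2) gdp inflation : 1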

   See also
   Simultaneous equations
   Regression with ARMA error
   Box-Jenkins model (ARIMA)
   Econometric models estimation

ARFIMA-FIGARCH

   The command for estimating ARFIMA(p1,d1,q1)-FIGARCH(p2,d2,q2)  model has the following form
    arfimafigarch! (p1,q1,p2,q2) <variable>

    Options:
   &d1fix <number>
    fix differencing parameter for mean
   &d2fix <number>
    fix differencing parameter for volatility
   &hygarch
    add this option to estimate HyGARCH
   &distr <normal|tstud|skewt>
    distribution of disturbances
   &method <bfgsn|simplex|bhhhn|sa>
    numerical algorithm
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
     Example:
    arfimafigarch! (0,0,1,1) DowJ &d1fix 0 &distr tstud
    This is an example of FIGARCH(1,d,1) with innovations distributed as Student's t.
  
     Literature:
   Laurent, S. and  J.P.Peters, "G@RCH 2.2: an Ox Package for Estimating and Forecasting Various ARCH Models," Journal of Economic Surveys, 16 (2002, No.3), 447-485.

   See also
   Box-Jenkins model (ARIMA)
   Econometric models estimation

Descriptive statistics

   The command for calculating descriptive statistics  has the following form
    descript! <variable>
 
    Example:
     descript! x
 
   The procedure calculates standard descriptive statistics such as minimum, maximum, mean, median, standard deviation, skewness (asymmetry), excess kurtosis, 1st order autocorrelation, etc.

     See also
   Statistical procedures

Correlation matrix

   The command for calculating correlation matrix  has the following form
    corr! <list of variables>
 
    Options:
   &spearman
    calculate Spearman's rank correlations
   &kendall
    calculate Kendall's rank correlations (Kendall's tau)
 
    Examples:
     corr! x data[z]/1000 ln(y)
     corr! x1 x2 x3 &kendall

     See also
   Statistical procedures

Autocorrelation function

   The command for calculating autocorrelation function has the following form
    acf! <variable>

    Options:
   &nlags
    lag length
   &pacf
    calculate partial autocorrelation function
 
    Examples:
     acf! inflation
     acf! ln(X)-ln(X[-1])  &nlags 30  &pacf
 
   This procedure calculates both autocorrelation function and partial autocorrelation function, as well as various statistics for them.
   By default the lag length is chosen to be n/5. (Not more than 150 values are shown in the table.)
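For reference, the standard sample autocorrelation formula can be sketched in Python (an illustration of the usual estimator, not of Matrixer's internals):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function r(1), ..., r(nlags) of series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()
    denom = np.sum(d**2)
    # r(k) = sum_{t=k+1..n} d(t) d(t-k) / sum_t d(t)^2
    return np.array([np.sum(d[k:] * d[:n - k]) / denom
                     for k in range(1, nlags + 1)])
```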

     See also
   Statistical procedures

Histogram

   The command for calculating histogram has the following form
    hist! <variable>
 
    Example:
     hist! x
 
   The command shows a density estimate (histogram) as a graph. A plot of kernel density estimate and a plot of normal density are also shown (for comparison).

     See also
   Statistical procedures

Spectral density

   The command for calculating spectral density has the  following form
    spectrum! <variable>
 
    Options:
   &window <parzen|hanning|hamming|daniell|quad|bartlett|trunc>
     window type; Parzen is the default
   &npoints
    number of intervals
   &bandwidth
    bandwidth (lag)
 
    Example:
     spectrum! y
 
   Spectral density is estimated using lag window. 

     See also
   Statistical procedures
   Spectrogram

Spectrogram

   The command for plotting spectral density has the following form
    spectrogram! <variable>
 
    Example:
     spectrogram! series[x]
 
   The command shows a spectral density estimate (spectrogram, periodogram) as a graph. The average line is also shown. The spectral density estimate is calculated for frequencies from 0 to 0.5 and is normalized so that the integral of the density over frequencies [0;1] equals 1 (so it is 1 on average). By default the Parzen window is used for the estimation (see Spectral windows).

     See also
   Statistical procedures
   Spectral density

<Spectral windows>

   Types of lag spectral windows (x in [0;1]):
Parzen:
   1-6*x^2+6*x^3, x<=1/2,
   2*(1-x)^3, x>=1/2
Tukey-Hanning (Tukey-Hann):
   (1+cos(pi*x))/2
Tukey-Hamming:
   0.54+0.46*cos(pi*x)
Daniell:
   sin(pi*x)/(pi*x)
Quadratic Parzen:
   1-x^2
Bartlett (triangular):
   1-x
Truncated (rectangular):
   1
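The weights above can be written compactly. A Python sketch (independent of Matrixer; keys follow the &window option values) for x in [0;1]:

```python
import numpy as np

# Lag-window weights w(x), x in [0;1]; keys follow the &window option values.
windows = {
    "parzen":   lambda x: 1 - 6*x**2 + 6*x**3 if x <= 0.5 else 2*(1 - x)**3,
    "hanning":  lambda x: (1 + np.cos(np.pi * x)) / 2,   # Tukey-Hanning
    "hamming":  lambda x: 0.54 + 0.46 * np.cos(np.pi * x),
    "daniell":  lambda x: float(np.sinc(x)),             # sin(pi*x)/(pi*x)
    "quad":     lambda x: 1 - x**2,                      # quadratic Parzen
    "bartlett": lambda x: 1 - x,                         # triangular
    "trunc":    lambda x: 1.0,                           # rectangular
}
```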
 

Normal PP-diagram

   The normal probability-probability diagram compares the empirical cumulative distribution function with the cumulative distribution function of a normal distribution with the sample mean and variance. If the line is far from the diagonal, then the sample is not from a normal distribution. The straight lines are 95% confidence bounds based on the Lilliefors critical value for the Kolmogorov-Smirnov statistic.
     See also
   Statistical procedures

Dickey-Fuller test (ADF)

   The command for calculating Dickey-Fuller statistic has the following form
    adftau! (<kind>,<difference>) <variable>
 
   "Kind":
    0  No constant
    1  Constant only
    2  Constant and trend
    3  Constant, trend and trend squared
 
   "Difference":
    0  Levels
     1  1st differences
    and so on.
 
    Example:
     adftau! (2,0) gdp[gr_rate]
 
    Remark:
  To use this procedure you have to install  J.MacKinnon's files on your computer.

     See also
   Dickey-Fuller test summary table
   Dickey-Fuller test panel
   Statistical procedures

How to install tables for ADF

   Matrixer can calculate P-values for augmented Dickey-Fuller test. Actually, Matrixer subroutine for ADF is a shell for the files that are due to James MacKinnon (James G. MacKinnon, "Numerical Distribution Functions for Unit Root and Cointegration Tests," Journal of Applied Econometrics, 11, 1996, 601-618). The files are available from 
    qed.econ.queensu.ca/pub/faculty/mackinnon/numdist/ .
 
   One has to download the file tabs-dos.zip and unzip it into the directory 
          "...\urcdist\"
(relative to Matrixer directory). Only probs.tab and urc-1.tab are needed.
 
   Matrixer P-values are not the same as those calculated by MacKinnon's original MS-DOS program (urcdist.zip), but the difference is not very large.

     See also
   Dickey-Fuller test (ADF)
   Dickey-Fuller test summary table
   Dickey-Fuller test panel

Dickey-Fuller test panel

   To invoke a panel for Dickey-Fuller test choose menu item
  Show > Dickey-Fuller test (ADF)

   At the panel you can set options for Dickey-Fuller test.
  "Constant and trend" determines whether intercept term and time trend must be included in ADF test.
  "Difference" determines whether the series must be differenced before doing ADF test.
        0  Levels
        1  1st differences
        and so on.
  "Number of lags" sets the order of ADF test.
        0  DF (no lags)
        1  ADF(1) (augmented DF test of order 1)
        and so on.
  "AR(p)" sets the order of autocorrelation test.
  "Start" and "End" determine the start and the end of the series

   Press the "Calculate" button (the one with the black triangle) to see the results of the test. It will show results for both the tau and z variants of the ADF test.
   The null hypothesis for DF test is that the series has a unit root. If the test statistic is insignificant (say, significance level is greater than 5%) then the null  hypothesis should be accepted.
 
   Press the "Summary" button to see a summary of results for different orders of the ADF test. In this case "Number of lags" sets the largest order of the ADF test to be shown.
 
    Remark:
  To use this procedure you have to install  J.MacKinnon's files on your computer.

     See also
   Dickey-Fuller test (ADF)
   Dickey-Fuller test summary table

Dickey-Fuller test summary table

   Summary table for the ADF test shows results for different  orders of the test. The results could be used to select the most appropriate order (lag length). There are three main approaches to this problem. 

  One approach is to make the residuals of the ADF test regression close to white noise. This can be tested using an autocorrelation test; Matrixer uses the Godfrey autocorrelation test. If the test statistic is significant, then the choice of lag is inappropriate.

  Another method is to start from some maximum lag length and "test down" using t or F statistics for the significance of the farthest lags. The process stops when the t-statistic/F-statistic is significant.
 
  It is also possible to use the information criteria AIC and BIC. The lag length with the minimal value of the information criterion is preferable.
 
   The null hypothesis for DF test is that the series has a unit root. If the test statistic is insignificant (say, significance level is greater than 5%) then the null  hypothesis should be accepted.
 
    Remark:
  To use this procedure you have to install  J.MacKinnon's files on your computer.

     See also
   Dickey-Fuller test (ADF)
   Dickey-Fuller test panel

Estimation results

    After regression estimation the user gets to a menu which allows viewing and analyzing the results. Afterwards this menu can be called using the Alt-R hot key.

            See the following topics
    Estimates and statistics
    Histogram of standardized residuals
    Outliers
    Influential observations
    Second order effects
    Variables deletion test
    Restrictions (functions of parameters)
    Diagnostics

     See also
   Econometric models estimation

Estimates and statistics

    The table shows estimation results for an econometric model. The information shown depends on the model. Below are comments that relate primarily to linear regression.
 
     The columns show:
    the variable (or parameter) name,
    the parameter estimate (the coefficient of the corresponding variable in a linear model),
    the estimate of the standard error of the parameter,
    the t statistic for the hypothesis that the parameter equals zero,
    the significance level (P-value) of the t statistic in square brackets (if the significance level is small, say, less than 5%, then the variable is said to be statistically significant)
 
    R2  is the coefficient of determination (in percent).
    R2adj.  is the coefficient of determination  adjusted for degrees of freedom.
    AIC  is Akaike information criterion.
    BIC  is Bayesian information criterion.
    DW is the Durbin-Watson statistic for first order autocorrelation in the error term.
    F is the Fisher statistic for the hypothesis that all parameters (excluding the intercept) are zero. (If the significance level in square brackets is small, say, less than 5%, then the regression as a whole is statistically significant. Note that the F statistic is meaningless if the regression has no intercept term!)

    After that statistics for model specification  (diagnostic statistics) follow. Significance levels for these test statistics are shown in square brackets. If a statistic is insignificant (say, significance level is greater  than 5%) then the null of correct specification should be accepted.
  'Normality': see Normality
  'Heteroskedasticity': see Heteroskedasticity
  'Functional form': see Model functional form
  'AR(1) in error term': see  Autocorrelated errors
  'ARCH(1) in error term': see Autoregressive conditional heteroskedasticity in error term

     See also
   Estimation results

<Akaike information criterion>

    The Akaike information criterion is an indicator used for selecting one of several rival models. The definition is
 
   AIC = - 2 (ln(L) - k) / n        
 
where L is the value of the log-likelihood function, k is the number of parameters in the model, and n is the number of observations.
Of two models, the one with the smaller AIC is "preferred".

<Bayesian information criterion>

    The Bayesian information criterion (also known as the Schwarz information criterion) is an indicator used for selecting one of several rival models. The definition is
 
   BIC = - (2 ln(L) - k ln(n)) / n        
 
where L is the value of the log-likelihood function, k is the number of parameters in the model, and n is the number of observations.
Of two models, the one with the smaller BIC is "preferred".
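Both criteria follow directly from the definitions above. A small Python sketch of these per-observation forms (independent of Matrixer):

```python
import numpy as np

def aic(loglik, k, n):
    """AIC = -2*(ln L - k) / n, the per-observation form defined above."""
    return -2 * (loglik - k) / n

def bic(loglik, k, n):
    """BIC = -(2*ln L - k*ln n) / n (Schwarz criterion, per-observation form)."""
    return -(2 * loglik - k * np.log(n)) / n
```

For n large enough that ln(n) > 2, BIC penalizes extra parameters harder than AIC.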

Diagnostics

    Normality
    Heteroskedasticity
    Model functional form
    Autocorrelated errors
    Autoregressive conditional  heteroskedasticity in error term
 
    When the estimates of the model have been obtained, it is important to test whether the model was specified correctly. Diagnostic test statistics are used for this.
 
    In specification testing the null hypothesis is always that the model is specified correctly and alternative hypothesis is that there is a specification error. If the test statistic is insignificant (say, significance level is greater than 5%)  then the null of correct specification should be accepted.

     See also
   Estimates and statistics
   Estimation results

Normality

    If the distribution of regression errors is non-normal, it does not necessarily lead to serious consequences (such as inconsistency). However, the normality assumption is important.
 
    First, fat tails or skewness of the distribution of regression errors may result in not very accurate estimates. The use of so-called robust estimation can increase the efficiency of the model estimates.
 
    Second, non-normality implies that the calculated t and F statistics are not distributed as t and F in finite samples. Generally these statistics are still consistent, so their use is justified by asymptotic theory. But under severe non-normality the asymptotic approximation may be very inaccurate in small samples.
 
    Non-normality of regression errors may be evident from the form of the histogram of residuals or from the plot of residuals. In the latter case one should pay attention to outliers.
 
    A formal test for normality of regression errors was proposed by Jarque and Bera (C.M.Jarque, A.K.Bera, "Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals," Economics Letters, 6 (1980), 255-9). It is based on the third and fourth moments.
 
    The test statistic is
   n * [1/6 * m(3)^2 / m(2)^3 + 1/24 * (m(4) / m(2)^2 - 3)^2]
where m(k) = Sum {1...n} (e(i)-mean(e))^k / n is the k-th central moment of the residuals. It is approximately distributed as chi-square with two degrees of freedom.
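The statistic can be computed directly from the central moments. A Python sketch (illustrative, not Matrixer code):

```python
import numpy as np

def jarque_bera(e):
    """Jarque-Bera statistic from the central moments of residuals e."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    d = e - e.mean()
    m2 = np.mean(d**2)
    m3 = np.mean(d**3)
    m4 = np.mean(d**4)
    # n * [m3^2 / (6 m2^3) + (m4/m2^2 - 3)^2 / 24]
    return n * (m3**2 / (6 * m2**3) + (m4 / m2**2 - 3)**2 / 24)
```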
 
    One can also use outlier statistic to test normality.
 
   Remarks:
 
  In specification testing the null hypothesis is always that the model is specified correctly and alternative hypothesis is that there is a specification error. If the test statistic is insignificant (say, significance level is greater than 5%)  then the null of correct specification should be accepted.
  Non-normality of regression errors may be a result of ordinary heteroskedasticity or autoregressive conditional  heteroskedasticity.

     See also
   Diagnostics
   Estimates and statistics
   Estimation results

Heteroskedasticity

    Heteroskedasticity means that variances of errors of different observations are different.
 
    There are many different kinds of heteroskedasticity. Consequently, a lot of different tests for heteroskedasticity could be devised. The easiest path is to check whether there exists a functional relationship between the error variance and the regressors, or between the error variance and the expected value of the dependent variable.
 
    Suppose that there is a functional relationship between  error variance sigma2(i) and some variables Z(i). Then sigma2(i) (which is the same as expected squared error,  E[eps(i)^2]) is a function of Z(i). One of the most widely used versions uses fitted values as Z(i): Z(i) = X(i)b.
 
    To test the absence of functional relationship one can use an auxiliary regression of the form
       e(i)^2 = a1 + Z(i)a2.
The appropriate statistic is equal to the coefficient of determination from the auxiliary regression times the number of observations; it is approximately distributed as chi-square with p degrees of freedom under the null, where p is the number of variables in Z. Alternatively, one can use the F test for the hypothesis that a2 = 0, which in this case has p and (n-1-p) degrees of freedom. Both versions are asymptotically equivalent.
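The n*R^2 version can be sketched in Python (illustrative; an OLS auxiliary regression with an intercept is assumed):

```python
import numpy as np

def het_nr2(e, Z):
    """n*R^2 statistic from the auxiliary regression e(i)^2 = a1 + Z(i)a2."""
    e2 = np.asarray(e, dtype=float) ** 2
    Z = np.atleast_2d(np.asarray(Z, dtype=float))
    if Z.shape[0] != len(e2):
        Z = Z.T                                  # accept Z as a row or a column
    X = np.column_stack([np.ones(len(e2)), Z])   # intercept + Z
    a, *_ = np.linalg.lstsq(X, e2, rcond=None)
    resid = e2 - X @ a
    r2 = 1 - np.sum(resid**2) / np.sum((e2 - e2.mean())**2)
    return len(e2) * r2   # approx. chi-square with p d.f. under the null
```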
 
    Another test uses an auxiliary regression of the form
       e(i)^2/sigma2 - 1 = a1 + Z(i)a2. (*)
LM test statistic (Breusch-Pagan statistic) is calculated as half the explained sum of squares from this regression.  It is approximately distributed as chi-square with p degrees of freedom. This version is sensitive to departures from normality but may have more power.
 
    Form (*) was proposed initially by Godfrey and by Breusch and Pagan (L.G.Godfrey, "Testing for Multiplicative Heteroskedasticity," Journal of Econometrics, 8 (1978), 227-36; T.S.Breusch, A.R.Pagan, "A Simple Test for Heteroskedasticity and Random Coefficient Variation," Econometrica, 47 (1979), 1287-94). The test was modified by Koenker (R.Koenker, "A Note on Studentizing a Test for Heteroskedasticity," Journal of Econometrics, 17 (1981), 107-12).
 
   Remark:
 
   In specification testing the null hypothesis is always that the model is specified correctly and alternative hypothesis is that there is a specification error. If the test statistic is insignificant (say, significance level is greater than 5%)  then the null of correct specification should be accepted.

     See also
   Diagnostics
   Autoregressive conditional heteroskedasticity in error term
   Estimates and statistics
   Estimation results

Model functional form

    If the data are generated according to the model   Y(i) = f(X(i)b) + eps(i), i = 1, ..., n,
where f(.) is a nonlinear function, but the estimated regression model is linear,   Y(i) = X(i)b + e(i),
then the residuals e(i) must contain an unaccounted component, which is a function of the regressors X(i). This may result in inconsistency of the coefficients b.
 
    The RESET test makes it possible to reveal non-linearity of the estimated function. The test checks the significance of various powers of the fitted values; the simplest version tests the addition of squared fitted values. The test was proposed by Ramsey (J.B.Ramsey, "Tests for Specification Errors in Classical Linear Least Squares Regression Analysis," Journal of the Royal Statistical Society B, 31 (1969), 161-72).
 
    Let Fit(i) = X(i)b be the fitted values and   v(k) = (Fit(1)^k, ..., Fit(n)^k) be the vector of k-th  powers of fitted values. The regressors matrix X is augmented by columns v(2),...,v(p), where p is the order of the test. Formally the test of the null of adequate functional form is carried out as a test of hypothesis that coefficients of added variables are simultaneously zero.
 
   Let e be the vector of residuals, V = (v(2), ..., v(p)) be the matrix composed of the vectors v(k), and MX = I - X inv(X' X) X' be a projection matrix. The statistic, which can be calculated as
     n (e' V inv(V' MX V) V' e) / (e' e)
is distributed approximately as chi-square with (p - 1) degrees of freedom. The statistic
      (n - m - p + 1) / (p - 1) * (e' V inv(V' MX V) V' e) /
      (e' e - e' V inv(V' MX V) V' e)
is distributed approximately as Fisher F with (p - 1) and (n - m - p + 1) degrees of freedom where m is the number of  initial regressors.
 
Both versions are asymptotically equivalent.
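The chi-square version above can be sketched in Python (illustrative; an OLS fit of y on the full regressor matrix X, including the intercept column, is assumed):

```python
import numpy as np

def reset_chi2(y, X, p=2):
    """RESET statistic n * e'V inv(V'MX V) V'e / e'e of order p."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    n = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    fit = X @ b                                   # fitted values Fit(i) = X(i)b
    e = y - fit                                   # residuals
    V = np.column_stack([fit**k for k in range(2, p + 1)])
    MV = V - X @ np.linalg.lstsq(X, V, rcond=None)[0]   # MX V
    q = V.T @ e
    return n * (q @ np.linalg.solve(V.T @ MV, q)) / (e @ e)
```

When y is exactly quadratic in x and the fitted model is linear, the added squared fitted values explain the residuals completely, so the statistic hits its upper bound n.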
 
   When testing functional form adequacy it may be worthwhile to pay attention to second order effects.
 
   Remark:
 
   In specification testing the null hypothesis is always that the model is specified correctly and alternative hypothesis is that there is a specification error. If the test statistic is insignificant (say, significance level is greater than 5%)  then the null of correct specification should be accepted.

     See also
   Diagnostics
   Estimates and statistics
   Estimation results

Autocorrelated errors

    Error autocorrelation (serial correlation) formally means that the variance-covariance matrix of the regression errors is not diagonal. If estimation does not take autocorrelation into account, then (at best) a loss of efficiency follows (the estimates are less accurate than under estimation techniques that take autocorrelation into account). Moreover, if there are lags of the dependent variable among the regressors, then autocorrelation leads to inconsistency of the OLS estimates. OLS estimates may also be inconsistent when errors are nonstationary, e.g. if they are generated by a random walk process.
 
    Error autocorrelation can be revealed by analyzing the residuals: autocorrelated errors show themselves in autocorrelated residuals. So it might be useful to examine the ACF or the spectrum of the residuals, or just a plot of the residuals.
 
    The most well-known formal test is the Durbin-Watson (DW) test. If the Durbin-Watson statistic is close to 0, then there is positive autocorrelation. It is desirable that the Durbin-Watson statistic be around 2.
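For reference, the standard definition of the DW statistic as a Python sketch (the formula itself is not printed in this topic, so this follows the textbook definition):

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_t (e(t) - e(t-1))^2 / sum_t e(t)^2 (standard definition)."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e)**2) / np.sum(e**2)
```

Perfectly positively autocorrelated residuals give 0; alternating residuals push the statistic towards 4.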
 
    The program also shows (as "AR(1) in error term") the test statistic, which is due to Godfrey (L.G.Godfrey, "Testing against General Autoregressive and Moving-Average Error Models When Regressors Include Lagged Dependent Variables," Econometrica, 46 (1978), 1293-1302). Unlike Durbin-Watson this test is applicable even if there are lags of dependent variable among the regressors.
 
    Let e(t) = Y(t) - X(t)b  be the residuals from the regression to be tested. Denote
     e[-k] = (0,...,0, e(1), ..., e(n-k))'
where n is the number of observations. The matrix of regressors, X, is augmented with rows e[-1], ... ,e[-p] (lags of residuals) where p is the order of the test. Formally testing the null of no autocorrelation is testing that the coefficients of added variables are simultaneously zero.
 
    Let e be the vector of residuals, E be the matrix combined from the lags of the residuals, E = (e[-1], ..., e[-p]), and
      MX = I - X inv(X'X) X'
be a projection matrix. Then the statistic, which can be calculated as
     n (e' E inv(E' MX E) E' e) / (e' e)
is distributed approximately as chi-square with p degrees of freedom. Statistic
      (n - m - p) / p * (e' E inv(E' MX E) E' e) /
      (e' e - e' E inv(E' MX E) E' e)
is distributed approximately as Fisher F with p and (n - m - p) degrees of freedom where m is the number of  initial regressors.
 
Both versions are asymptotically equivalent.
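The chi-square version above can be sketched in Python (illustrative; an OLS fit of y on the full regressor matrix X is assumed as the initial regression):

```python
import numpy as np

def godfrey_chi2(y, X, p=1):
    """Godfrey LM statistic n * e'E inv(E'MX E) E'e / e'e of order p."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    n = len(y)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b                                  # residuals of the regression
    # e[-k] = (0,...,0, e(1), ..., e(n-k)): lagged residuals padded with zeros
    E = np.column_stack([np.concatenate([np.zeros(k), e[:n - k]])
                         for k in range(1, p + 1)])
    ME = E - X @ np.linalg.lstsq(X, E, rcond=None)[0]   # MX E
    q = E.T @ e
    return n * (q @ np.linalg.solve(E.T @ ME, q)) / (e @ e)
```

Since the numerator is a projection of the residual vector, the statistic always lies between 0 and n.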
 
   Remark:
 
   In specification testing the null hypothesis is always that the model is specified correctly and alternative hypothesis is that there is a specification error. If the test statistic is insignificant (say, significance level is greater than 5%)  then the null of correct specification should be accepted.

     See also
   Diagnostics
   Estimates and statistics
   Estimation results

Autoregressive conditional heteroskedasticity in error term

   An example of autoregressive conditional heteroskedasticity  is the effect of volatility clustering in some financial time series.
 

     See also
   Diagnostics
   Estimates and statistics
   Estimation results
   GARCH regression

Variables deletion test

    The F statistic for the hypothesis that the coefficients of the marked variables are simultaneously equal to zero is given (in models with nonlinear functions the hypothesis is that the marked parameters are simultaneously equal to zero).

     See also
   Estimation results
   Restrictions (functions of parameters)

Outliers

   The plot shows the F statistic for outliers (anomalous observations). This is the test statistic for adding a dummy variable which is 1 for one specific observation and 0 elsewhere. Outliers are characterized by a large value of the F statistic.

     See also
   Estimation results
   Normality
   Influential observations

Influential observations

   The plot shows a leverage measure (DFFITS). The straight line on the plot is drawn at the level of 4 times the mean of the measure. The leverage measure is several times higher than the mean for highly influential observations.

     See also
   Estimation results
   Outliers

Second order effects

    This procedure automatically calculates t statistics for the variable addition test for variables that are cross-products of regressors. This test can be used for testing and/or choosing the functional form of the regression model.

     See also
   Estimation results
   Model functional form

Histogram of standardized residuals

   The histogram pictures the form of the error density and allows one to judge visually whether the distribution is close to normal.
   The graph contains not only the histogram, but also the normal density with the same mean and variance, and a plot of a kernel density estimate.

     See also
   Estimation results
   Normality
   Histogram

Restrictions (functions of parameters)

    This procedure allows one to calculate functions of the parameter estimates and to test the hypothesis that the parameters satisfy a set of nonlinear restrictions (Wald test).
 
    Each restriction is specified as an expression (formula), which is tested for equality to zero.
    In linear regression and similar models the parameters are denoted as  %1, %2, etc.
    In nonlinear regression and other models with nonlinear functions the same names are used as in corresponding formulas.
 
      Example:
         %1+%2
         %3-1

     See also
   Estimation results
   Variables deletion test
   Parameters

Forecasting

   Dynamic forecast is available only for the ARIMA model.
 
   Fitted values stored in the variable  \Fitted  give the static forecast for most regression models.

     See also
   Estimation results
   Model matrices

Nonlinear function minimization

    The command for minimization of a nonlinear function has the following form
    min! <formula>
    The names of function parameters begin with the  % character.

    Options:
   &method <newton|bfgsa|bfgsn|simplex|sa>
    numerical algorithm
   &start <variable>
    vector of initial values for parameters
   &deltas <variable>
    vector of parameters precision
   &precision <number>
    overall precision (convergence parameter)
   &maxstep <positive integer number>
    maximal number of iterations
 
    The default numerical algorithm is Newton's method. Other classical algorithms are also available, such as the simplex method, etc.
   Example:
    min! 100 * sqr(%x2 - sqr(%x1)) + sqr(1 - %x1)

    This is an example for the well-known Rosenbrock function.
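For comparison, the same minimization can be done outside Matrixer. A Python sketch using scipy (assumed available; BFGS roughly corresponds to the &method bfgsn option above):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    """Rosenbrock function 100*(x2 - x1^2)^2 + (1 - x1)^2; minimum at (1, 1)."""
    x1, x2 = v
    return 100.0 * (x2 - x1**2)**2 + (1.0 - x1)**2

# Start from the classical test point (-1.2, 1.0).
res = minimize(rosenbrock, x0=[-1.2, 1.0], method="BFGS")
```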

   See also
   Econometric models estimation

Macros (blocks of commands)

    Macros are used for running a group of commands. Actually they are programs written in an internal programming language of the Matrixer program. The language is not very sophisticated, but it allows one to automate routine operations.
 
    Most commands in a macro have the same format as commands that are started from the command window. A simple macro (which is just a group of such commands) produces almost the same output as those commands started one after another from the command window. There are also special control commands, which govern the flow of macros and are used only in macros.
    It is possible to put several commands in one line, or one command in several lines. Every command must finish with the ; character.
    Macro may contain comments:
  After // all other text in a line is treated as comment.
  Text from (* to *) and from /*  to */ is treated as comment.
 
            See the following topics
     Menu of macros
     Files of macros (command files)
     Macro editor
     Control commands in macros
     Messages and signals in macros

     See also
   Commands
   Command window

Menu of macros

    The menu of macros is intended for handling macro files. While in this menu it is possible to create, delete, rename, copy, edit or run a macro.
    The menu of macros lists all the macro files in the current working directory.
    The menu of macros can be called from the menu of matrices (variables) by pressing the Alt-B hot key.
    To edit an existing macro, press ENTER or double-click it with the mouse.
    To run an existing macro, press Shift-ENTER.
 
         Other hot keys:
    INS  create;     DEL  delete;
    Alt-N  rename;   Alt-C  copy;

     See also
   Macros (blocks of commands)

Files of macros (command files)

    Macro files have the extension .bch. They are ordinary text (human-readable) files, which can be opened in any text editor. Besides, Matrixer has an internal macro editor. To start working with macros, call the Menu of macros.
 

     See also
   Macros (blocks of commands)

Control commands in macros

    In addition to the commands that can be started from the command window, there are also special control commands, which govern the flow of macros and are used only in macros.
 
  exit!;
stop macros
  label! <label name>;
label declaration
  goto! <label name>;
unconditional jump to label
  goto! <label name> <condition (scalar expression)>;
conditional jump to label (on condition that  the scalar expression is positive ("true") ).
 
     Example:
     @i:=1; @n:=30;  @f:=1;
     label! cat;
     @f:=@f*@i; @i:=@i+1;  beep!;
     ask! " Step " @i-1 ". Press ESC to stop";
     text! "   Step " @i-1;
     goto! cat @n-@i+1;
     wait! @n "! = " @f
 
    Also macros may contain if statement and  for or loop looping statements.

     See also
   Macros (blocks of commands)
   Messages and signals in macros

If statement

    If statement:
 
     if! <condition> ;
     <Commands to execute if condition is true>
     else!;
     <Commands to execute if condition is false>
     endif!;
 
    If statement with additional conditions:
 
     if! <condition1> ;
     <Commands to execute if condition 1 is true>
     nextif! <condition2> ;
     <Commands to execute if condition 1 is false and condition 2 is true>
     else!;
     <Commands to execute if conditions 1 and 2 are false>
     endif!;
 
    Nextif part could be repeated several times with  different conditions.
 

     See also
   Macros (blocks of commands)
   Control commands in macros

Looping in macros

    For cycle:
 
     for!  <scalar> <initial value> <final value> ;
     <Commands>
     endfor!;
 
    Loop cycle:
 
     loop!;
     <Commands>
     endloop!;
 
    Body of a cycle could contain break and  continue commands :
 
     break! ;
     break! <condition> ;
     continue! ;
     continue! <condition> ;
 
    The break command terminates the loop, and the continue command terminates the current iteration and starts the next iteration.
 
   Examples:
 
     @sum:=0;
     for! @i 1 100;
       @sum:=@sum+@i;
     endfor!;
     wait! @sum;
        calculates sum of the numbers from 1 to 100
 
     @n:=0;
     @sum:=0;
     loop!;
       @n:=@n+1;
       @sum:=@sum+~u01;
       break! @sum>50;
     endloop!;
     wait! @n;
        sums random numbers from uniform distribution  U[0,1] and displays the number of items when  the sum exceeds 50.
 
   Remarks:
  The body of a loop cycle usually must contain a break command to terminate it.
  If the break command is at the beginning of the cycle body, the construction is much the same as the while cycle in Pascal.
  If the break command is at the end of the cycle body, the construction is much the same as the repeat-until cycle in Pascal.

     See also
   Macros (blocks of commands)
   Control commands in macros

Messages and signals in macros

    Messages and signals:
  beep!;
  beep sound
  text! <string expression>;
  displays a message string; this command can be used for tracing the execution of a macro without suspending its flow
  wait! <string expression>;
  displays a message and suspends the flow of the macro until a button is pressed
  ask! <string expression>;
  asks whether to stop execution of the macro; stops the macro if the Cancel button is pressed
    About string expressions see Strings.
 
  starttimer!;
  starts timer.
  showtimer!;
  shows current value of time counter in seconds (the value of @timer scalar).
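   For illustration, a sketch combining the timer commands with a summation loop (the loop itself is illustrative):

```
starttimer!;
@sum := 0;
for! @i 1 100000;
  @sum := @sum+~u01;
endfor!;
showtimer!;
wait! @sum;
```

   shows the elapsed time of the summation and then displays the accumulated sum.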

     See also
   Macros (blocks of commands)
   Control commands in macros

Macro editor

    The macro editor is intended for editing and executing macros. It can be accessed from the menu of macros.
    Only one macro file can be open at a time.
 
  Remark:
  Double-click a symbol or word to see a hint for it.

    Hot keys:
       Shift-ENTER  run macro,
       Ctrl-S  save macro,
       ESC  exit macro editor.

      See also
   Macros (blocks of commands)

Commands

    A command can be executed either from the command window or as one of the commands in a macro.
 
            See the following topics
     Assignment commands
     Graphs
     Econometric models estimation
     Statistical procedures
     Control commands in macros
     Messages and signals in macros
     Import data
     Matrix decompositions
     Other statistical and mathematical commands
     Other commands
 

     See also
   Macros (blocks of commands)
   Command window

Import

   The command for data import has one of the following forms:
    import! <matrix name> <file name>
or
    import! <matrix name> clipboard
(to import from clipboard)
 
    Options:
   &fromline <integer number>
    starting line
   &toline <integer number>
    terminal line
    0 (to end) is default
   &fixedwidth <0|1>
    presume columns of fixed width
    0 is default
   &similarity <number>
     threshold for row similarity (fraction, used with &fixedwidth 1)
    0.8 is default
   &ignorenonnumeric <0|1>
     ignore non-numeric rows (used with &fixedwidth 1)
    1 is default
   &textincomments <0|1>
    store text as comments
    1 is default
   &linenumber <0|1>
    store line number
    0 is default
   &sep <tab|comma|semicolon|vbar|slash|<any symbol>>
    separators
    tab, comma and semicolon are defaults
   &space <tab|comma|space|<any symbol>>
    space symbols
    space is default
   &eol <crcrlf|crlf|cr|lf|<any symbol>>
    end-of-line symbols
    crcrlf, crlf, cr and lf are defaults
   &quote <single|double|<any symbol>>
    quotes
    single and double are defaults
   &dpoint <point|comma>
    decimal point
    point and comma are defaults
   &clearrows <0|1>
    clear rows
    1 is default
   &clearcolumns <0|1>
    clear columns
    1 is default
   &rowsclearing <number>
     fraction of non-numeric data in rows (used with &clearrows 1)
    1 is default
   &columnsclearing <number>
     fraction of non-numeric data in columns (used with &clearcolumns 1)
    1 is default
   &iterateclearing <0|1>
    iterate clearing rows and columns
    1 is default
   &sepeol <0|1>
    treat any separator as end of line (for "by variable" data format)
    0 is default
   &slicelen <integer number>
    length of variables (for "by variable" data format, used with &sepeol 1)
    0 (no slicing) is default
   &numsub <text>=<number>
    substitution "Text->Number"
   &strsub <substring1>=<substring2>
    substitution "Substring1->Substring2"
 
     Example:
    import! DATA C:\Docs\data.txt
    &fixedwidth 1
    &rowsclearing 0.8
    &sep
    &dpoint point
    &numsub I=1
    &numsub II=2
    &numsub -9999=8934567
    &strsub ,=
    &strsub D=E

     See also
   Quick start: How to import text file
   Commands
   Other commands

Graphs

        Histogram:
    hist! <variable>
     Example:  hist! x
   Hot key: Alt-H
 
        Plot by observation number:
    plot! <list of variables>
     Example:  plot! x y
   Hot key: Alt-G
 
        Plot by time:
    timeplot! <list of variables>
     Example:  timeplot! x y
 
        XY-plot (line):
    xyplot! <X-axis variable>  <list of Y-axis  variables>
     Example:  xyplot! x y1 y2
   Hot key: Alt-Y
 
        Scatter diagram ("stars"):
    scatter! <X-axis variable>  <list of Y-axis  variables>
     Example:  scatter! x y1 y2
   Hot key: Alt-S
 
    The general command (optional parameters are shown in square brackets):
 
    plot! [(<kind of X axis>,<kind of Y axis>)] [<X-axis variable>:] <list of Y-axis variables>
 
    The kind of an axis can be either ordinary linear (lin) or logarithmic (log). If this parameter is absent, both axes are linear by default.
    If the first variable is followed by a colon character, it is treated as the X-axis variable; otherwise the observation number is used as the X-axis variable.
   Each variable in the list of Y-axis variables can be followed by the characters -, * and | in any combination.
   The - character means a line.
   The * character means a "star".
   The | character means a bar.
    If these characters are absent, the plot points are joined by a line.
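   For instance (the variable names are illustrative), assuming the syntax described above:

```
plot! (lin,log) x: y1- y2|
```

   plots y1 as a line and y2 as bars against x, with a linear X axis and a logarithmic Y axis.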
 
        fplot!
        graph!
        plot3d!
 

     See also
   Commands
   Command window
   Macros (blocks of commands)

Example A

   Example (imitating a histogram):

@nbins := 10;
@min := minel(x);
@max := maxel(x);
@range := @max-@min;
@min := @min-@range/100;
@max := @max+@range/100;
@range := (@max-@min)/@nbins;
histogr == zerosvec(@nbins);
for! @i 1 rows(x);
  @bin := int((x@(@i,1)-@min)/@range)+1;
  histogr@(@bin,1) := histogr@(@bin,1)+1;
endfor!;
for! @bin 1 @nbins;
  histogr@(@bin,2) := @min+@range*(@bin-0.5);
endfor!;
xyplot! histogr[2] histogr[1]|;

Example B

  Example: generating standard normal random numbers by the polar method of Marsaglia, Bray (1964)

@n := 1000;
x == onesvec(@n);
for! @i 1 div(@n,2);
  loop!;
    @v1 := 2*~u01-1;
    @v2 := 2*~u01-1;
    @r := sqr(@v1)+sqr(@v2);
    break! @r<1;
  endloop!;
  @f := sqrt(-2*ln(@r)/@r);
  @norm1 := @v1*@f;
  @norm2 := @v2*@f;
  @i2 := @i*2;
  x@(@i2-1,1) := @norm1;
  if! @i2<=@n;
    x@(@i2,1) := @norm2;
  endif!;
endfor!;
hist! x;

Example C

    Cox's "partial likelihood" method for estimating the proportional hazard model.
    The approach makes it possible to drop the "baseline" hazard function and to estimate the dependence of duration on the regressors only. The intercept is also dropped, since it is a multiplier of the "baseline" hazard.
 
  Literature:
  Cox, D.R. "Partial Likelihood," Biometrika, 62 (1975), 269-276.
 
Sorted == (Data[Y] Data[X1] Data[X2]) ? Data[Y];
    All variables are sorted so that durations go in ascending order.
Sorted == clearrows(Sorted);
    Drop rows with missing values
namevars! Sorted Y X1 X2;
mle! ln(fitted)-ln($csum(fitted)) >> fitted = exp(%a1*Sorted[X1]+%a2*Sorted[X2]);

Example D

   Quasi-maximum likelihood estimation of an autoregressive stochastic volatility model
 
#y == clearrows(RTS[d]);
#y == sqr(#y);
#y == ln(#y+mean(#y)*0.02)
- mean(#y)*0.02/(#y+mean(#y)*0.02)+1.27;
#y == #y - mean(#y);
#start == 0.98|3|0.03;
mle! if($i>1,-0.5*(ln(2*@pi*f)+sqr(v)/f),@na)
>> h1 = if($i>1,%phi*$lag(h1+P1*v/f),0)
>> P1 = if($i>1,sqr(%phi)*$lag(P)+%S22,%S22/(1-sqr(%phi)))
>> f = P1+%S12
>> v = #y-h1
>> h = h1+P1*v/f
>> P = P1-sqr(P1)/f
&start #start &method simplex;
#p==flipv(\def_p);
#p1==flipv(\def_p1);
#h==flipv(\def_h);
#h1==flipv(\def_h1);
delete! #h_smooth;
@phi := \thetas@(1,1);
#h_smooth := if($i>1,#h+@phi*#p/$lag(#p1)*($l1-$lag(#h1)),#h);
#h_smooth == flipv(#h_smooth);


~u01 (random number)

~u01 
Uniform distribution
Generates random numbers from uniform U[0,1] distribution
 
    Section >>

~n01 (random number)

~n01 
Standard normal distribution
Generates random numbers from standard normal distribution, N(0,1)
 
    Section >>

~ev1 (random number)

~ev1 
Type 1 extreme value distribution (Gumbel distribution)
Generates random numbers from standard type 1 extreme value distribution
 
    Section >>

~logn01 (random number)

~logn01 
Standard lognormal distribution
Generates random numbers from standard lognormal distribution
 
    Section >>

~exp (random number)

~exp 
Standard exponential distribution
Generates random numbers from standard exponential distribution
 
    Section >>

ln (ordinary function)

ln 
Natural logarithm (logarithm to base e)
 
    Section >>

exp (ordinary function)

exp 
Exponential function
 
    Section >>

sqrt (ordinary function)

sqrt 
Square root
 
    Section >>

sqr (ordinary function)

sqr 
Square
 
    Section >>

sin (ordinary function)

sin 
Sine
 
    Section >>

cos (ordinary function)

cos 
Cosine
 
    Section >>

abs (ordinary function)

abs 
Absolute value
 
    Section >>

lg (ordinary function)

lg 
Common logarithm (logarithm to base 10)
 
    Section >>

power10 (ordinary function)

power10 
Power of 10
 
    Section >>

tan (ordinary function)

tan 
Tangent
 
    Section >>

arctan (ordinary function)

arctan 
Arc tangent
 
    Section >>

arcsin (ordinary function)

arcsin 
Arc sine
 
    Section >>

arccos (ordinary function)

arccos 
Arc cosine
 
    Section >>

lngamma (ordinary function)

lngamma 
Natural logarithm of gamma function
 
    Section >>

gamma (ordinary function)

gamma 
Gamma function
 
    Section >>

digamma (ordinary function)

digamma 
Digamma function (Psi function) (the first derivative of natural logarithm of gamma function)
 
    Section >>

trigamma (ordinary function)

trigamma 
Trigamma function (the second derivative of natural logarithm of gamma function)
 
    Section >>

int (ordinary function)

int 
Integer part of a number
int(x). The greatest integer less than or equal to x
 
    Section >>

round (ordinary function)

round 
Rounding
round(x). Rounds x to the nearest integer
 
    Section >>

frac (ordinary function)

frac 
Fractional part of a number
frac(x). frac(x)=x-int(x)
 
    Section >>

sgn (ordinary function)

sgn 
Sign of a number
sgn(x)
sgn(x)=-1 for x<0
sgn(x)=0 for x=0
sgn(x)=1 for x>0
 
    Section >>

~t (ordinary function)

~t 
t distribution
~t(df). Generates random numbers from Student t distribution with df degrees of freedom
 
    Section >>

~chisq (ordinary function)

~chisq 
Chi-square distribution
~chisq(df). Generates random numbers from chi-square distribution with df degrees of freedom
 
    Section >>

~sgamma (ordinary function)

~sgamma 
Standard gamma distribution
~sgamma(lambda). Generates random numbers from gamma distribution with scale parameter 1 and shape parameter lambda
 
    Section >>

~ev3 (ordinary function)

~ev3 
Type 3 extreme value distribution (Weibull distribution)
~ev3(gamma). Generates random numbers from  standard type 3 extreme value distribution with scale parameter 1 and shape parameter gamma
 
    Section >>

~poi (ordinary function)

~poi 
Poisson distribution
~poi(mu). Generates random numbers from Poisson distribution with parameter mu
 
    Section >>

n01cdf (ordinary function)

n01cdf 
CDF of standard normal distribution
n01cdf(x). Returns the value of cumulative distribution function at point x for standard normal distribution, N(0,1)
 
    Section >>

lnn01cdf (ordinary function)

lnn01cdf 
Logarithm of CDF of standard normal distribution
lnn01cdf(x). Returns the value of logarithm of cumulative distribution function at point x for standard normal distribution, N(0,1)
 
    Section >>

n01invcdf (ordinary function)

n01invcdf 
Inverse distribution function: standard normal distribution
n01invcdf(p). Returns the value of inverse distribution function for probability p for standard normal distribution, N(0,1)
 
    Section >>

n01den (ordinary function)

n01den 
Density of standard normal distribution
n01den(x). Returns the value of probability density function at point x for standard normal distribution, N(0,1)
 
    Section >>

lnn01den (ordinary function)

lnn01den 
Logarithm of density of standard normal distribution
lnn01den(x). Returns the value of logarithm of probability density function at point x for standard normal distribution, N(0,1)
 
    Section >>

i (ordinary function)

i 
Indicator function
i(x)=1 for x>0
i(x)=0 for x<=0
 
    Section >>

not (ordinary function)

not 
Logical negation
not(x)=0 for x>0;
not(x)=1 for x<=0
 
    Section >>

lnrel (ordinary function)

lnrel 
Natural logarithm of 1+x
lnrel(x)=ln(1+x)
 
    Section >>

expm1 (ordinary function)

expm1 
Exponential function minus 1
expm1(x)=exp(x)-1
 
    Section >>

logisticcdf (ordinary function)

logisticcdf 
CDF of logistic distribution
logisticcdf(x). Returns the value of cumulative distribution function at point x for logistic distribution
 
    Section >>

lnlogisticcdf (ordinary function)

lnlogisticcdf 
Logarithm of CDF of logistic distribution
lnlogisticcdf(x). Returns the value of logarithm of cumulative distribution function at point x for logistic distribution
 
    Section >>

logisticinvcdf (ordinary function)

logisticinvcdf 
Inverse distribution function: logistic distribution
logisticinvcdf(p). Returns the value of inverse distribution function for probability p for logistic distribution
 
    Section >>

logisticden (ordinary function)

logisticden 
Density of logistic distribution
logisticden(x). Returns the value of probability density function at point x for logistic distribution
 
    Section >>

lnlogisticden (ordinary function)

lnlogisticden 
Logarithm of density of logistic distribution
lnlogisticden(x). Returns the value of logarithm of probability density function at point x for logistic distribution
 
    Section >>

power (ordinary function)

power 
Power function
power(x,y)=x^y
 
    Section >>

boxcox (ordinary function)

boxcox 
Box-Cox transformation
boxcox(x,y)=(x^y-1)/y
 
    Section >>

max (ordinary function)

max 
Maximum of two numbers
max(x,y). Returns maximum of x and y
 
    Section >>

min (ordinary function)

min 
Minimum of two numbers
min(x,y). Returns minimum of x and y
 
    Section >>

roundd (ordinary function)

roundd 
Rounding
roundd(x,d). Rounds x; d is the decimal position at which x is rounded.
 
    Section >>

div (ordinary function)

div 
Integer division
div(x,y). Returns the integer part of x/y
 
    Section >>

mod (ordinary function)

mod 
Remainder on division
mod(x,y). Returns the remainder on division of x by y.
  mod(x,y)=x-div(x,y)*y
 
    Section >>

eq (ordinary function)

eq 
Equality indicator
eq(x,y)=1 for x=y,
eq(x,y)=0 for x<>y
 
    Section >>

neq (ordinary function)

neq 
Inequality indicator
neq(x,y)=1 for x<>y,
neq(x,y)=0 for x=y
 
    Section >>

lt (ordinary function)

lt 
Indicator "less than"
lt(x,y)=1 for x<y,
lt(x,y)=0 for x>=y
 
    Section >>

gt (ordinary function)

gt 
Indicator "greater than"
gt(x,y)=1 for x>y,
gt(x,y)=0 for x<=y
 
    Section >>

le (ordinary function)

le 
Indicator "less than or equal"
le(x,y)=1 for x<=y,
le(x,y)=0 for x>y
 
    Section >>

ge (ordinary function)

ge 
Indicator "greater than or equal"
ge(x,y)=1 for x>=y,
ge(x,y)=0 for x<y
 
    Section >>

or (ordinary function)

or 
Logical "or"
or(x,y)=1 for x>0 or y>0,
or(x,y)=0 otherwise
 
    Section >>

xor (ordinary function)

xor 
Logical "xor" (exclusive "or")
xor(x,y)=1 for (x>0 and y<=0) or (y>0 and x<=0);
xor(x,y)=0 otherwise
 
    Section >>

and (ordinary function)

and 
Logical "and"
and(x,y)=1 for x>0 and y>0,
and(x,y)=0 otherwise
 
    Section >>

~beta (ordinary function)

~beta 
Beta distribution
~beta(a,b). Generates random numbers from beta distribution with parameters a and b
 
    Section >>

~gamma (ordinary function)

~gamma 
Gamma distribution
~gamma(alpha,lambda). Generates random numbers from gamma distribution with scale parameter alpha and shape parameter lambda
 
    Section >>

~f (ordinary function)

~f 
F distribution
~f(df1,df2). Generates random numbers from F distribution (Fisher) with df1 and df2 degrees of freedom
 
    Section >>

~bin (ordinary function)

~bin 
Binomial distribution
~bin(p,n). Generates random numbers from binomial distribution with parameters p (probability) and n (number of trials)
 
    Section >>

lnbeta (ordinary function)

lnbeta 
Natural logarithm of beta function
lnbeta(a,b). Returns the value of logarithm of beta function at (a, b)
 
    Section >>

chisqsign (ordinary function)

chisqsign 
Significance level: chi-square distribution
chisqsign(x,df). Returns the value of one minus cumulative distribution function at point x for chi-square distribution with df degrees of freedom
 
    Section >>

chisqcdf (ordinary function)

chisqcdf 
Distribution function: chi-square distribution
chisqcdf(x,df). Returns the value of cumulative distribution function at point x for chi-square distribution with df degrees of freedom
 
    Section >>

chisqinvcdf (ordinary function)

chisqinvcdf 
Inverse distribution function: chi-square distribution
chisqinvcdf(p,df). Returns the value of inverse distribution function for probability p for chi-square distribution with df degrees of freedom
 
    Section >>

chisqden (ordinary function)

chisqden 
Density of chi-square distribution
chisqden(x,df). Returns the value of probability density function at point x for chi-square distribution with df degrees of freedom
 
    Section >>

tsign (ordinary function)

tsign 
Significance level: t distribution
tsign(t,df). Returns two-sided significance level at point t for Student t distribution with df degrees of freedom
 
    Section >>

tcdf (ordinary function)

tcdf 
Distribution function: t distribution
tcdf(x,df). Returns the value of cumulative distribution function at point x for Student t distribution with df degrees of freedom
 
    Section >>

tinvcdf (ordinary function)

tinvcdf 
Inverse distribution function: t distribution
tinvcdf(p,df). Returns the value of inverse distribution function for probability p for Student t distribution with df degrees of freedom
 
    Section >>

tden (ordinary function)

tden 
Density of t distribution
tden(x,df). Returns the value of probability density function at point x for Student t distribution with df degrees of freedom
 
    Section >>

lntden (ordinary function)

lntden 
Logarithm of density of t distribution
lntden(x,df). Returns the value of logarithm of probability density function at point x for Student t distribution with df degrees of freedom
 
    Section >>

gedcdf (ordinary function)

gedcdf 
Distribution function: generalized error distribution
gedcdf(x,nu). Returns the value of cumulative distribution function at point x for generalized error distribution (GED) with shape parameter nu
 
    Section >>

gedden (ordinary function)

gedden 
Density of generalized error distribution
gedden(x,nu). Returns the value of probability density function at point x for generalized error distribution (GED) with shape parameter nu
 
    Section >>

lngedden (ordinary function)

lngedden 
Logarithm of density of generalized error distribution
lngedden(x,nu). Returns the value of logarithm of probability density function at point x for generalized error distribution (GED) with shape parameter nu
 
    Section >>

if (ordinary function)

if 
Logical choice
if(c,x,y)
  if(c,x,y) = x for c > 0,
  if(c,x,y) = y otherwise
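  For illustration, the absolute value of a variable can be expressed through logical choice (x and absx are illustrative names):

```
absx == if(x, x, -x);
```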
 
    Section >>

adfcdf (ordinary function)

adfcdf 
Augmented Dickey-Fuller test (ADF)
adfcdf(t,type,n).
t is the Dickey-Fuller tau statistic
type: 0 (no constant), 1 (constant only),
3 (constant and trend),
4 (constant, trend and trend squared)
n is the number of observations
 
    Section >>

normden (ordinary function)

normden 
Density of normal distribution
normden(x,a,s2). Returns the value of probability density function at point x for normal distribution with parameters a and s2, N(a,s2)
 
    Section >>

lnnormden (ordinary function)

lnnormden 
Logarithm of density of normal distribution
lnnormden(x,a,s2). Returns the value of logarithm of probability density function at point x for normal distribution with parameters a and s2, N(a,s2)
 
    Section >>

fsign (ordinary function)

fsign 
Significance level: F distribution
fsign(f,df1,df2). Returns significance  level (area of right tail) at point f for Fisher F distribution with df1 and df2 degrees of freedom
 
    Section >>

fcdf (ordinary function)

fcdf 
Distribution function: F distribution
fcdf(f,df1,df2). Returns the value of cumulative distribution function at point f for Fisher F distribution with df1 and df2 degrees of freedom
 
    Section >>

finvcdf (ordinary function)

finvcdf 
Inverse distribution function: F distribution
finvcdf(p,df1,df2). Returns the value of inverse distribution function for probability p for Fisher F distribution with df1 and df2 degrees of freedom
 
    Section >>

fden (ordinary function)

fden 
Density of F distribution
fden(f,df1,df2). Returns the value of probability density function at point f for Fisher F distribution with df1 and df2 degrees of freedom
 
    Section >>

gammacdf (ordinary function)

gammacdf 
Distribution function: gamma distribution (incomplete gamma function)
gammacdf(x,alpha,lambda). Returns the value of cumulative distribution function at point x for gamma distribution with scale parameter alpha and shape parameter lambda (the value of incomplete gamma function)
 
    Section >>

gammaden (ordinary function)

gammaden 
Density of gamma distribution
gammaden(x,alpha,lambda). Returns the value of probability density function at point x for gamma distribution with scale parameter alpha and shape parameter lambda
 
    Section >>

lngammaden (ordinary function)

lngammaden 
Logarithm of density of gamma distribution
lngammaden(x,alpha,lambda). Returns the value of logarithm of probability density function at point x for gamma distribution with scale parameter alpha and shape parameter lambda
 
    Section >>

betacdf (ordinary function)

betacdf 
Distribution function: beta distribution (incomplete beta function)
betacdf(x,a,b). Returns the value of cumulative distribution function at point x for beta distribution with parameters a and b (the value of incomplete beta function)
 
    Section >>

betaden (ordinary function)

betaden 
Density of beta distribution
betaden(x,a,b). Returns the value of probability density function at point x for beta distribution with parameters a and b
 
    Section >>

lnbetaden (ordinary function)

lnbetaden 
Logarithm of density of beta distribution
lnbetaden(x,a,b). Returns the value of logarithm of probability density function at point x for beta distribution with parameters a and b
 
    Section >>

betainvcdf (ordinary function)

betainvcdf 
Inverse distribution function: beta distribution
betainvcdf(p,a,b). Returns the value of inverse distribution function for probability p for beta distribution with parameters a and b
 
    Section >>

nctcdf (ordinary function)

nctcdf 
Distribution function: non-central t distribution
nctcdf(x,df,delta). Returns the value of cumulative distribution function at point x for non-central t distribution with df degrees of freedom and non-centrality parameter delta
 
    Section >>

lnnctcdf (ordinary function)

lnnctcdf 
Logarithm of distribution function: non-central t distribution
lnnctcdf(x,df,delta). Returns the value of logarithm of cumulative distribution function at point x for non-central t distribution with df degrees of freedom and non-centrality parameter delta

lnnctden (ordinary function)

lnnctden 
Logarithm of density of noncentral t distribution
lnnctden(x,df,delta). Returns the value of logarithm of probability density function at point x for noncentral t distribution with df degrees of freedom and non-centrality parameter delta
 
    Section >>

nctden (ordinary function)

nctden 
Density of noncentral t distribution
nctden(x,df,delta). Returns the value of probability density function at point x for noncentral t distribution with df degrees of freedom and non-centrality parameter delta
 
    Section >>

nctmean (ordinary function)

nctmean 
Mean of noncentral t distribution
nctmean(df,delta). Returns the value of mean for noncentral t distribution with df degrees of freedom and non-centrality parameter delta
 
    Section >>

nctvar (ordinary function)

nctvar 
Variance of noncentral t distribution
nctvar(df,delta). Returns the value of variance for noncentral t distribution with df degrees of freedom and non-centrality parameter delta
 
    Section >>

anctcdf (ordinary function)

anctcdf 
 
anctcdf(x,df,delta). Returns the value of cumulative distribution function at point x for non-central t distribution with df degrees of freedom and non-centrality parameter delta

lnanctden (ordinary function)

lnanctden 
Logarithm of density of noncentral t distribution
lnanctden(x,df,delta). Returns the value of logarithm of probability density function at point x for noncentral t distribution with df degrees of freedom and non-centrality parameter delta

anctden (ordinary function)

anctden 
Density of noncentral t distribution
anctden(x,df,delta). Returns the value of probability density function at point x for noncentral t distribution with df degrees of freedom and non-centrality parameter delta

lnskewtden (ordinary function)

lnskewtden 
Logarithm of density of skewed t distribution
lnskewtden(x,df,lambda). Returns the value of logarithm of probability density function at point x for skewed Student t distribution with df degrees of freedom and "skewness" parameter lambda (with mean 0 and variance 1)

skewtden (ordinary function)

skewtden 
Density of skewed t distribution
skewtden(x,df,lambda). Returns the value of probability density function at point x for skewed Student t distribution with df degrees of freedom and "skewness" parameter lambda (with mean 0 and variance 1)

skewtcdf (ordinary function)

skewtcdf 
Distribution function: skewed t distribution
skewtcdf(x,df,lambda). Returns the value of cumulative distribution function at point x for skewed Student t distribution with df degrees of freedom and "skewness" parameter lambda (with mean 0 and variance 1)

exists (scalar function of a matrix)

exists 
Indicator of existence of a matrix
exists(A). Returns 1 if matrix A exists, 0 if it does not exist
 
    Section >>

rows (scalar function of a matrix)

rows 
Number of rows
rows(A). Returns number of rows in matrix A
 
    Section >>

cols (scalar function of a matrix)

cols 
Number of columns
cols(A). Returns number of columns in matrix A
 
    Section >>

det (scalar function of a matrix)

det 
Determinant
det(A) where A is a square matrix. Returns the determinant of matrix A
 
    Section >>

lnabsdet (scalar function of a matrix)

lnabsdet 
Logarithm of absolute value of determinant
lnabsdet(A) where A is a square matrix. Returns the logarithm of the absolute value of the determinant of matrix A
 
    Section >>

tr (scalar function of a matrix)

tr 
Trace
tr(A) where A is a square matrix. Returns the trace of matrix A (sum of diagonal elements)
 
    Section >>

mean (scalar function of a matrix)

mean 
Mean of elements of a matrix
 
    Section >>

sum (scalar function of a matrix)

sum 
Sum of elements of a matrix
 
    Section >>

ss (scalar function of a matrix)

ss 
Sum of squares of elements of a matrix
 
    Section >>

css (scalar function of a matrix)

css 
Centered sum of squares of elements of a matrix
 
    Section >>

sd (scalar function of a matrix)

sd 
Standard deviation of elements of a matrix
 
    Section >>

var (scalar function of a matrix)

var 
Variance of elements of a matrix
 
    Section >>

skewness (scalar function of a matrix)

skewness 
Skewness of elements of a matrix

kurtosis (scalar function of a matrix)

kurtosis 
Kurtosis of elements of a matrix

excess (scalar function of a matrix)

excess 
Excess kurtosis of elements of a matrix

maxel (scalar function of a matrix)

maxel 
Maximal element of a matrix
 
    Section >>

minel (scalar function of a matrix)

minel 
Minimal element of a matrix
 
    Section >>

med (scalar function of a matrix)

med 
Median of elements of a matrix
 
    Section >>

medsign (scalar function of a matrix)

medsign 
Significance level for median of elements of a matrix

gini (scalar function of a matrix)

gini 
Gini coefficient
gini(x). Returns Gini coefficient for vector x
 
    Section >>

sdet (scalar function of a matrix)

sdet 
Determinant of a symmetric matrix
sdet(A) where A is a symmetric matrix. Returns determinant of matrix A
 
    Section >>

lnsdet (scalar function of a matrix)

lnsdet 
Logarithm of determinant of a symmetric matrix
lnsdet(A) where A is a symmetric matrix. Returns logarithm of determinant of matrix A
 
    Section >>

quantile (scalar function of a matrix)

quantile 
Sample quantile
quantile(x,p). Returns p-th quantile of vector x
 
    Section >>

cdf (scalar function of a matrix)

cdf 
Sample cumulative distribution function
cdf(x,xx). Returns the value of sample cumulative distribution function of vector x at point xx
 
    Section >>

moment (scalar function of a matrix)

moment 
Sample central moment
moment(x,i). Returns i-th order sample central moment of elements of matrix x

cov (scalar function of a matrix)

cov 
Sample covariance of two vectors
cov(x,y). Returns sample covariance of vectors x and y
 
    Section >>

corr (scalar function of a matrix)

corr 
Sample correlation of two vectors
corr(x,y). Returns sample correlation coefficient of vectors x and y
 
    Section >>

select (scalar function of a matrix)

select 
Selection of element in ascending order
select(x,i). Returns i-th element of vector x in ascending order
 
    Section >>

fiperio (scalar function of a matrix)

fiperio 
Fractional integration parameter estimate using periodogram
fiperio(x,n). Returns fractional integration parameter. n is the number of points used

el (scalar function of a matrix)

el 
Element of a matrix
el(A,i,j). Returns (i,j)-th element of matrix A
 
    Section >>

wtmean (scalar function of a matrix)

wtmean 
Weighted trimmed mean
wtmean(X,W,p,beta). Returns p-trimmed mean of vector X with weights W and asymmetry parameter beta

void (matrix function)

void 
Create void matrix
void(). Creates a (0x0) matrix
 
    Section >>

m (matrix function)

m 
"Empty" function
m(A). Returns matrix A

eval (matrix function)

eval 
Eigenvalues
eval(A) where A is a square matrix. Returns a (column) vector of eigenvalues of matrix A.
 
    Section >>

evec (matrix function)

evec 
Eigenvectors
evec(A) where A is a square matrix. Returns a matrix consisting of the eigenvectors of matrix A in columns.
 
    Section >>

diag (matrix function)

diag 
Diagonal matrix from vector
diag(b) where b is a column vector. Returns a diagonal matrix with b(i) as diagonal elements.
 
    Section >>

diagonal (matrix function)

diagonal 
Diagonal of a matrix
diagonal(A) where A is a square matrix. Returns a column vector of the diagonal elements of matrix A, that is, A[i,i]
 
    Section >>

inv (matrix function)

inv 
Inverse of a matrix
inv(A) where A is a square non-singular matrix.
 
    Section >>

cdfvec (matrix function)

cdfvec 
Sample cumulative distribution function
cdfvec(x). This function for each element x[i] returns the value of sample cumulative distribution function F*[i] (0<=F*[i]<=1)
 
    Section >>

ranks (matrix function)

ranks 
Ranks of elements
ranks(x). This function for each element x[i] returns rank associated with it when vector x is sorted in ascending order.
 
    Section >>

lorenz (matrix function)

lorenz 
Lorenz curve
lorenz(x). Returns Lorenz curve for vector x
 
    Section >>

normpp (matrix function)

normpp 
Normal probability-probability diagram
normpp(x)

csum (matrix function)

csum 
Cumulative sum by columns
csum(A)
    csum(A)[i,j] = Sum (k=1,..,i) A[k,j].
 
    Section >>

transp (matrix function)

transp 
Transpose
transp(A). Returns transposed matrix A
    transp(A)[i,j] = A[j,i].
 
    Section >>

fliph (matrix function)

fliph 
Flip matrix horizontally
fliph(A)
    fliph(A)[i,j] = A[i,m-j+1]  where m is the number of columns
 
    Section >>

flipv (matrix function)

flipv 
Flip matrix vertically
flipv(A)
    flipv(A)[i,j] = A[n-i+1,j]  where n is the number of rows
 
    Section >>

rotate90 (matrix function)

rotate90 
Rotate matrix 90 degrees clockwise
rotate90(A)
    rotate90(A)[i,j] = A[j,n-i+1]  where n is the number of rows
 
    Section >>

vec (matrix function)

vec 
Vector from columns of a matrix
vec(A). Returns a column vector formed by stacking the columns of matrix A
 
    Section >>

vecr (matrix function)

vecr 
Vector from rows of a matrix
vecr(A). Returns a vector formed by concatenating the rows of matrix A
 
    Section >>

meanmat (matrix function)

meanmat 
Means by column as a matrix of the same dimensionality
 
    Section >>

centr (matrix function)

centr 
Centered matrix (by column)
 
    Section >>

norm (matrix function)

norm 
Standardized matrix (by column)
 
    Section >>

ort (matrix function)

ort 
Orthonormalized matrix (by column)
 
    Section >>

chol (matrix function)

chol 
Cholesky decomposition (of a symmetric matrix)
If T=chol(A), then T is an upper triangular matrix such that T'T=A
 
    Section >>
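The convention T'T=A above differs from NumPy's, which returns a lower triangular L with L.L'=A; transposing bridges the two. A sketch (NumPy assumed, and the mapping T = L' is an inference from the stated convention, not from Matrixer itself):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])     # symmetric positive definite
L = np.linalg.cholesky(A)      # lower triangular, L @ L.T == A
T = L.T                        # upper triangular, T.T @ T == A  (Matrixer's convention)
```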

inner (matrix function)

inner 
Inner product of a matrix by itself
inner(A)=A'A
 
    Section >>

outer (matrix function)

outer 
Outer product of a matrix by itself
outer(A)=A.A'
 
    Section >>

sumvec (matrix function)

sumvec 
Sums by column as a column vector
 
    Section >>

meanvec (matrix function)

meanvec 
Means by column as a column vector
 
    Section >>

ssvec (matrix function)

ssvec 
Sums of squares by column as a column vector
 
    Section >>

cssvec (matrix function)

cssvec 
Centered sums of squares by column as a column vector
 
    Section >>

sdvec (matrix function)

sdvec 
Standard deviation by column as a column vector
 
    Section >>

sinv (matrix function)

sinv 
Inverse of a symmetric matrix
sinv(A) where A is a symmetric non-singular matrix.
 
    Section >>

covmat (matrix function)

covmat 
Sample variance-covariance matrix of columns of a matrix
 
    Section >>

corrmat (matrix function)

corrmat 
Sample correlation matrix of columns of a matrix
 
    Section >>

sval (matrix function)

sval 
Singular values of a matrix as a vector
If A=U.diag(S).V' is the singular value decomposition of matrix A, then sval(A)=S
 
    Section >>

ortsvd (matrix function)

ortsvd 
Orthogonalization (left matrix of singular value decomposition)
If A=U.diag(S).V' is the singular value decomposition of matrix A, then ortsvd(A)=U
 
    Section >>

svdright (matrix function)

svdright 
Singular value decomposition, right matrix
If A=U.diag(S).V' is the singular value decomposition of matrix A, then svdright(A)=V
 
    Section >>

sort1 (matrix function)

sort1 
Sorts a column vector in ascending order
 
    Section >>

diff (matrix function)

diff 
First differences along columns of a matrix
 
    Section >>

diffln (matrix function)

diffln 
First differences of logarithms along columns of a matrix (logarithmic rates of growth)
 
    Section >>

testcols (matrix function)

testcols 
Existence of missing values in columns of a matrix
testcols(A). Returns a column vector whose j-th element equals 1 if the j-th column of matrix A contains no missing values, and 0 otherwise
 
    Section >>

testrows (matrix function)

testrows 
Existence of missing values in rows of a matrix
testrows(A). Returns a column vector whose i-th element equals 1 if the i-th row of matrix A contains no missing values, and 0 otherwise
 
    Section >>

clearcols (matrix function)

clearcols 
Delete incomplete columns
clearcols(A). Returns matrix A without its incomplete columns (columns that contain missing values)
 
    Section >>

clearrows (matrix function)

clearrows 
Delete incomplete rows
clearrows(A). Returns matrix A without its incomplete rows (rows that contain missing values)
 
    Section >>

fft (matrix function)

fft 
Fast Fourier transform
fft(x) where x is a matrix with two columns: the first contains the real parts and the second the imaginary parts. The number of rows in x must be a power of two. Returns the discrete Fourier transform of the corresponding complex vector
 
    Section >>

ifft (matrix function)

ifft 
Inverse fast Fourier transform
ifft(x) where x is a matrix with two columns: the first contains the real parts and the second the imaginary parts. The number of rows in x must be a power of two. Returns the inverse discrete Fourier transform of the corresponding complex vector
 
    Section >>

daub4 (matrix function)

daub4 
Fast wavelet transform, Daubechies-4
 
    Section >>

daub4inv (matrix function)

daub4inv 
Inverse fast wavelet transform, Daubechies-4
 
    Section >>

vech (matrix function)

vech 
Vector from lower triangular part of a matrix
vech(A). Returns a column vector formed from the lower triangular part of matrix A (stacked by columns)
 
    Section >>

unvech (matrix function)

unvech 
Symmetric matrix from a vector
unvech(X). Returns the symmetric (m x m) matrix produced from a vector X of length m(m+1)/2. This function is the inverse of vech
 
    Section >>
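The vech/unvech pair can be sketched in NumPy to make the column-by-column stacking concrete (NumPy is an assumption here, not part of Matrixer):

```python
import numpy as np

def vech(A):
    # stack the lower triangular part of A column by column
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def unvech(v):
    # rebuild the symmetric (m x m) matrix; len(v) must equal m(m+1)/2
    m = int((np.sqrt(8 * len(v) + 1) - 1) / 2)
    A = np.zeros((m, m))
    k = 0
    for j in range(m):
        A[j:, j] = v[k:k + m - j]
        k += m - j
    return A + A.T - np.diag(np.diag(A))   # mirror below-diagonal part upward

S = np.array([[1.0, 2.0],
              [2.0, 5.0]])
```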

lndet (matrix function)

lndet 
Logarithm of determinant
lndet(A) where A is a square matrix. Returns a vector of length 2: the first element is the logarithm of the absolute value of the determinant of matrix A; the second element is the sign of the determinant (-1, 0 or 1)
 
    Section >>

pacf (matrix function)

pacf 
Transform autocovariance function to PACF
pacf(ACov) where ACov is a vector of autocovariances
 
    Section >>

acov (matrix function)

acov 
Autocovariance function
acov(X). Returns the autocovariance function of vector X
 
    Section >>

toepl (matrix function)

toepl 
Create Toeplitz matrix
toepl(V). Creates a symmetric Toeplitz matrix from vector V
 
    Section >>
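A symmetric Toeplitz matrix built from a vector V has constant diagonals, T[i,j] = V[|i-j|+1]. A minimal NumPy sketch of this construction (NumPy assumed; the 1-based indexing above becomes 0-based below):

```python
import numpy as np

def toepl(v):
    # symmetric Toeplitz matrix: T[i, j] = v[|i - j|]  (0-based)
    v = np.asarray(v)
    i, j = np.indices((len(v), len(v)))
    return v[np.abs(i - j)]

T = toepl(np.array([1.0, 0.5, 0.25]))   # e.g. an autocovariance vector
```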

roots (matrix function)

roots 
Roots of a real polynomial
roots(Coef) where Coef is a vector of coefficients, a[0],...,a[m]
 
    Section >>

invroots (matrix function)

invroots 
Real polynomial coefficients from its roots
invroots(Roots) where Roots is an (m x 2) matrix of roots. Returns the vector of coefficients a[0],...,a[m]
 
    Section >>

fliproots (matrix function)

fliproots 
Flip all roots of real polynomial outside unit circle
fliproots(Coef) where Coef is a vector of coefficients a[0],...,a[m]. Returns the vector of coefficients of the transformed polynomial.
 
    Section >>

genacov (matrix function)

genacov 
Generate stationary Gaussian process
genacov(acov). Returns a generated Gaussian process with autocovariance function acov
 
    Section >>

genacovfft (matrix function)

genacovfft 
Generate stationary Gaussian process
genacovfft(acov). Returns a generated Gaussian process with autocovariance function acov. The fast Fourier transform (FFT) is used. The length of acov should be a power of two plus one (2^k+1).
 
    Section >>

sort (matrix function)

sort 
Sort a matrix according to order of elements in a vector
sort(A,x) where A is a matrix and x is a vector. Returns matrix A with rows sorted according to ascending order of elements in vector x
 
    Section >>

regr (matrix function)

regr 
Coefficients of linear regression
regr(y,X). Returns coefficients of linear regression of y on X (method of least squares)
 
    Section >>

sysequ (matrix function)

sysequ 
Solve a system of linear equations
sysequ(A,B). Returns the solution x to the system of linear equations Ax=B where A is a square matrix
 
    Section >>

projoff (matrix function)

projoff 
Projection on orthogonal subspace
projoff(A,B). Returns the matrix of projections of the columns of matrix A onto the subspace orthogonal to the subspace spanned by the columns of matrix B
 
    Section >>

projonto (matrix function)

projonto 
Projection on a subspace spanned by columns of a matrix
projonto(A,B). Returns the matrix of projections of the columns of matrix A onto the subspace spanned by the columns of matrix B
 
    Section >>
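One standard construction of these two projections uses the pseudoinverse: projecting onto the column space of B and onto its orthogonal complement. A NumPy sketch (an assumption about the construction; Matrixer's internal algorithm may differ):

```python
import numpy as np

def projonto(A, B):
    # orthogonal projection of the columns of A onto the column space of B
    return B @ np.linalg.pinv(B) @ A

def projoff(A, B):
    # projection onto the orthogonal complement of the column space of B
    return A - projonto(A, B)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))
B = rng.standard_normal((6, 3))
```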

kron (matrix function)

kron 
Kronecker product
kron(A,B). Returns Kronecker product of matrices A and B

kronh (matrix function)

kronh 
Rowwise Kronecker product
kronh(A,B). Returns the matrix whose i-th row is kron(A[i],B[i]), where A[i] and B[i] are the i-th rows of matrices A and B, and kron() is the Kronecker product
 
    Section >>

kronv (matrix function)

kronv 
Columnwise Kronecker product
kronv(A,B). Returns the matrix whose j-th column is kron(A[j],B[j]), where A[j] and B[j] are the j-th columns of matrices A and B, and kron() is the Kronecker product
 
    Section >>
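The row-wise and column-wise (Khatri-Rao) products can be spelled out in NumPy, which makes the difference between kronh and kronv explicit (NumPy assumed):

```python
import numpy as np

def kronh(A, B):
    # row-wise Kronecker product: row i is kron(A[i, :], B[i, :])
    return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

def kronv(A, B):
    # column-wise Kronecker product (Khatri-Rao): column j is kron(A[:, j], B[:, j])
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
```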

extcols (matrix function)

extcols 
Extract columns
extcols(A,d). Returns matrix consisting of those columns of matrix A for which the corresponding element of vector d is positive
 
    Section >>

extrows (matrix function)

extrows 
Extract rows
extrows(A,d). Returns matrix consisting of those rows of matrix A for which the corresponding element of vector d is positive
 
    Section >>

delcols (matrix function)

delcols 
Delete columns
delcols(A,d). Returns matrix which results from matrix A by deleting those columns for which the corresponding element of vector d is positive
 
    Section >>

delrows (matrix function)

delrows 
Delete rows
delrows(A,d). Returns matrix which results from matrix A by deleting those rows for which the corresponding element of vector d is positive
 
    Section >>

conv (matrix function)

conv 
Convolution
conv(x,y). Returns the convolution of vectors x and y. The result can also be viewed as the product of two polynomials given by their coefficient vectors
 
    Section >>
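The polynomial view of convolution can be checked with NumPy (an outside tool, not part of Matrixer): multiplying (1 + 2x) by (1 + 3x) gives 1 + 5x + 6x^2.

```python
import numpy as np

# Convolution of coefficient vectors equals polynomial multiplication:
# (1 + 2x) * (1 + 3x) = 1 + 5x + 6x^2
c = np.convolve([1.0, 2.0], [1.0, 3.0])
```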

wmeanvec (matrix function)

wmeanvec 
Weighted average
wmeanvec(x,w). Returns the weighted averages of the columns of x with weights w, as a column vector

armafilter (matrix function)

armafilter 
ARMA filter
armafilter(Y,AR,MA). Returns the filtered series for an ARMA process with parameter vectors given by matrices AR and MA
 
    Section >>

wtmean1 (matrix function)

wtmean1 
Trimmed mean
wtmean1(X,W,P,Beta). Returns a matrix consisting of P[i]-trimmed means of vector X with weights W and asymmetry parameters Beta[j]

idenmat (matrix function)

idenmat 
Identity matrix
idenmat(n). Returns the (n x n) identity matrix
 
    Section >>

onesvec (matrix function)

onesvec 
Vector of ones
onesvec(n). Returns a column vector of length n with all elements equal to 1
 
    Section >>

zerosvec (matrix function)

zerosvec 
Vector of zeros
zerosvec(n). Returns a column vector of length n with all elements equal to 0
 
    Section >>

vec123 (matrix function)

vec123 
Vector 1,2,3,... (linear trend)
vec123(n). Returns a column vector of length n with i-th element equal to i
 
    Section >>

trend (matrix function)

trend 
Linear trend
trend(n). Returns a column vector of length n with i-th element equal to i
 
    Section >>

n01vec (matrix function)

n01vec 
Vector from N(0,1)
n01vec(n). Returns a column vector of length n with elements generated independently from standard normal distribution, N(0,1)
 
    Section >>

u01vec (matrix function)

u01vec 
Vector from U[0,1]
u01vec(n). Returns a column vector of length n with elements generated independently from uniform distribution on [0,1]
 
    Section >>

onesmat (matrix function)

onesmat 
Matrix of ones
onesmat(n,m). Returns a (n x m) matrix with all elements equal to 1
 
    Section >>

zerosmat (matrix function)

zerosmat 
Matrix of zeros
zerosmat(n,m). Returns a (n x m) matrix with all elements equal to 0
 
    Section >>

n01mat (matrix function)

n01mat 
Matrix from N(0,1)
n01mat(n,m). Returns a (n x m) matrix with elements generated independently from standard normal distribution, N(0,1)
 
    Section >>

u01mat (matrix function)

u01mat 
Matrix from U[0,1]
u01mat(n,m). Returns a (n x m) matrix with elements generated independently from uniform distribution on [0,1]
 
    Section >>

dummy (matrix function)

dummy 
Dummy variable (unit vector)
dummy(n,i). Returns a column vector of length n with i-th element equal to 1 and all other elements equal to 0
 
    Section >>

clonev (matrix function)

clonev 
Matrix reproduced vertically
clonev(A,k). If A is an (n x m) matrix, the function returns the (nk x m) matrix formed by stacking k copies of A vertically
 
    Section >>

cloneh (matrix function)

cloneh 
Matrix reproduced horizontally
cloneh(A,k). If A is an (n x m) matrix, the function returns the (n x mk) matrix formed by placing k copies of A side by side
 
    Section >>

col (matrix function)

col 
Column of a matrix
col(A,i). Extracts i-th column from matrix A
 
    Section >>

row (matrix function)

row 
Row of a matrix
row(A,i). Extracts i-th row from matrix A
 
    Section >>

lag (matrix function)

lag 
Lag of a matrix
lag(A,k). Returns a matrix of the same dimensions as matrix A whose (i,j)-th element equals the (i-k,j)-th element of A for i>k, and a missing value otherwise
 
    Section >>

clag (matrix function)

clag 
Lag of a matrix (circular)
clag(A,k). Returns a matrix of the same dimensions as matrix A whose (i,j)-th element equals the (i-k,j)-th element of A for i>k and the (n+i-k,j)-th element for i<=k, where n is the number of rows of A (assuming 0<=k<=n)
 
    Section >>

slice (matrix function)

slice 
Slice a vector
slice(X,len). Cuts a long vector X into pieces of length len and combines them into a matrix (the inverse of vec)
 
    Section >>
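The vec/slice pair corresponds to column-major flattening and reshaping. A NumPy sketch (NumPy assumed; slice_ is named with an underscore only to avoid Python's built-in slice):

```python
import numpy as np

def vec(A):
    # stack the columns of A into one long vector (column-major order)
    return A.flatten(order="F")

def slice_(x, length):
    # cut x into pieces of the given length and place them side by side
    # as the columns of a matrix -- the inverse of vec
    return np.asarray(x).reshape(length, -1, order="F")

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
```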

bssample (matrix function)

bssample 
Sample for bootstrap
bssample(A,n). Returns a matrix constructed by sampling n rows of matrix A with replacement
 
    Section >>

armaacov (matrix function)

armaacov 
Autocovariance function of ARMA process
armaacov(AR,MA,n). Returns the autocovariance function of an ARMA process with parameter vectors given by matrices AR and MA. n is the length of the result
 
    Section >>

genarma (matrix function)

genarma 
Generate ARMA process
genarma(AR,MA,n). Returns a generated series from an ARMA process with parameter vectors given by matrices AR and MA. n is the length of the series
 
    Section >>

wbssample (matrix function)

wbssample 
Weighted sample for bootstrap
wbssample(A,W,n). Returns a matrix constructed by sampling n rows of matrix A with replacement, with probabilities given by the vector of weights W
 
    Section >>

submat (matrix function)

submat 
Extracts submatrix of a matrix
submat(A,top,bottom,left,right). Returns submatrix of matrix A
 
    Section >>

fdiff (matrix function)

fdiff 
Fractional difference
fdiff(X,D). Fractional difference of order D of variable X
 
    Section >>

hpfilter (matrix function)

hpfilter 
Hodrick-Prescott filter
hpfilter(X,Lambda). Hodrick-Prescott filter for variable X with smoothing parameter Lambda. Kydland and Prescott suggested using Lambda = 1600 for quarterly data
 
    Section >>
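The HP trend solves min over tau of sum((X - tau)^2) + Lambda * sum of squared second differences of tau, which has the closed form tau = (I + Lambda D'D)^(-1) X with D the second-difference matrix. A dense-matrix NumPy sketch of that formula (an illustration of the definition; Matrixer's implementation may differ):

```python
import numpy as np

def hpfilter(x, lam):
    # Hodrick-Prescott trend: tau = (I + lam * D'D)^(-1) x,
    # where D is the (n-2) x n second-difference operator
    n = len(x)
    D = np.diff(np.eye(n), n=2, axis=0)
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, x)
    return trend                       # the cycle component is x - trend

t = np.arange(20.0)
x = 2.0 + 0.5 * t                      # an exactly linear series
trend = hpfilter(x, 1600.0)            # a linear series is its own HP trend
```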

replace (matrix function)

replace 
Replace elements in a matrix
replace(A,x,y). Replaces all elements x by y in matrix A
 
    Section >>

wmomentvec (matrix function)

wmomentvec 
Weighted moment
wmomentvec(x,w,i). Returns the i-th-order central weighted moments of the columns of x with weights w, as a column vector

wtmeanweights (matrix function)

wtmeanweights 
Weighted trimmed mean actual weights
wtmeanweights(X,W,p,beta). Returns the actual weights for the p-trimmed mean of vector X with weights W and asymmetry parameter beta

fiacov (matrix function)

fiacov 
Autocovariance function of fractionally integrated process
fiacov(d,n). Returns the autocovariance function of an ARFIMA(0,d,0) process. n is the length of the result
 
    Section >>

genfi (matrix function)

genfi 
Generate fractionally integrated process
genfi(d,n). Returns a generated series from an ARFIMA(0,d,0) process. n is the length of the series

create (matrix function)

create 
Create a matrix filled with a number
create(m,n,x). Creates an (m x n) matrix with every element equal to x
 
    Section >>

grid (matrix function)

grid 
Uniform grid
grid(x1,x2,n). Returns a column vector of length n+1 with i-th element equal to ((n-i)*x1+i*x2)/n
 
    Section >>
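The grid formula can be reproduced in NumPy; assuming the index i runs from 0 to n (which makes the endpoints exactly x1 and x2), it coincides with an evenly spaced grid:

```python
import numpy as np

def grid(x1, x2, n):
    # column vector of length n+1: element i is ((n - i)*x1 + i*x2)/n,
    # assuming i = 0..n so that the endpoints are x1 and x2
    i = np.arange(n + 1)
    return ((n - i) * x1 + i * x2) / n

g = grid(0.0, 1.0, 4)   # evenly spaced points from x1 to x2
```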

genarfima (matrix function)

genarfima 
Generate ARFIMA process
genarfima(d,AR,MA,n). Returns generated ARFIMA(p,d,q) series of length n
 
    Section >>

arfimaacov (matrix function)

arfimaacov 
Autocovariance function of ARFIMA process
arfimaacov(d,AR,MA,n). Returns the autocovariance function of an ARFIMA(p,d,q) process with parameter vectors given by matrices AR and MA. n is the length of the result

Random number generation

Functions for generating random numbers

~u01 
Uniform distribution
~n01 
Standard normal distribution
~exp 
Standard exponential distribution
~logn01 
Standard lognormal distribution
~bin 
Binomial distribution
~t 
t distribution
~chisq 
Chi-square distribution
~sgamma 
Standard gamma distribution
~poi 
Poisson distribution
~beta 
Beta distribution
~gamma 
Gamma distribution
~f 
F distribution
~ev1 
Type 1 extreme value distribution (Gumbel distribution)
~ev3 
Type 3 extreme value distribution (Weibull distribution)
 

    See also
   Functions

Elementary functions

ln 
Natural logarithm (logarithm to base e)
exp 
Exponential function
sqrt 
Square root
sqr 
Square
sin 
Sine
cos 
Cosine
abs 
Absolute value
power 
Power function
boxcox 
Box-Cox transformation
lg 
Common logarithm (logarithm to base 10)
power10 
Power of 10
max 
Maximum of two numbers
min 
Minimum of two numbers
int 
Integer part of a number
round 
Rounding
roundd 
Rounding
frac 
Fractional part of a number
div 
Integer division
mod 
Remainder on division
tan 
Tangent
arctan 
Arc tangent
arcsin 
Arc sine
arccos 
Arc cosine
lnrel 
Natural logarithm of 1+x
expm1 
Exponential function minus 1, e^x - 1
 

    See also
   Functions

Logical and indicator functions

sgn 
Sign of a number
i 
Indicator function
not 
Logical negation
eq 
Equality indicator
neq 
Inequality indicator
lt 
Indicator "less than"
gt 
Indicator "greater than"
le 
Indicator "less than or equal"
ge 
Indicator "greater than or equal"
or 
Logical "or"
xor 
Logical "xor" (exclusive "or")
and 
Logical "and"
if 
Logical choice
 

    See also
   Functions

Functions for statistical distributions

n01cdf 
CDF of standard normal distribution
lnn01cdf 
Logarithm of CDF of standard normal distribution
n01invcdf 
Inverse distribution function: standard normal distribution
n01den 
Density of standard normal distribution
lnn01den 
Logarithm of density of standard normal distribution
normden 
Density of normal distribution
lnnormden 
Logarithm of density of normal distribution
chisqsign 
Significance level: chi-square distribution
chisqcdf 
Distribution function: chi-square distribution
chisqinvcdf 
Inverse distribution function: chi-square distribution
chisqden 
Density of chi-square distribution
tsign 
Significance level: t distribution
tcdf 
Distribution function: t distribution
tinvcdf 
Inverse distribution function: t distribution
tden 
Density of t distribution
lntden 
Logarithm of density of t distribution
fsign 
Significance level: F distribution
fcdf 
Distribution function: F distribution
finvcdf 
Inverse distribution function: F distribution
fden 
Density of F distribution
gammacdf 
Distribution function: gamma distribution (incomplete gamma function)
gammaden 
Density of gamma distribution
lngammaden 
Logarithm of density of gamma distribution
betacdf 
Distribution function: beta distribution (incomplete beta function)
betaden 
Density of beta distribution
lnbetaden 
Logarithm of density of beta distribution
betainvcdf 
Inverse distribution function: beta distribution
logisticcdf 
CDF of logistic distribution
lnlogisticcdf 
Logarithm of CDF of logistic distribution
logisticinvcdf 
Inverse distribution function: logistic distribution
logisticden 
Density of logistic distribution
lnlogisticden 
Logarithm of density of logistic distribution
 

    See also
   Functions

Additional functions for statistical distributions

adfcdf 
Augmented Dickey-Fuller test (ADF)
nctcdf 
Distribution function: non-central t distribution
nctden 
Density of noncentral t distribution
lnnctden 
Logarithm of density of noncentral t distribution
nctmean 
Mean of noncentral t distribution
nctvar 
Variance of noncentral t distribution
gedcdf 
Distribution function: generalized error distribution
gedden 
Density of generalized error distribution
lngedden 
Logarithm of density of generalized error distribution
 

    See also
   Functions

Special functions

lngamma 
Natural logarithm of gamma function
gamma 
Gamma function
lnbeta 
Natural logarithm of beta function
digamma 
Digamma function (Psi function) (the first derivative of natural logarithm of gamma function)
trigamma 
Trigamma function (the second derivative of natural logarithm of gamma function)
gammacdf 
Distribution function: gamma distribution (incomplete gamma function)
gammaden 
Density of gamma distribution
lngammaden 
Logarithm of density of gamma distribution
betacdf 
Distribution function: beta distribution (incomplete beta function)
betaden 
Density of beta distribution
lnbetaden 
Logarithm of density of beta distribution
betainvcdf 
Inverse distribution function: beta distribution
 

    See also
   Functions

Matrix functions: algebra of matrices

This section lists functions for various algebraic operations on matrices

diag 
Diagonal matrix from vector
diagonal 
Diagonal of a matrix
inv 
Inverse of a matrix
transp 
Transpose
eval 
Eigenvalues
evec 
Eigenvectors
centr 
Centered matrix (by column)
norm 
Standardized matrix (by column)
ort 
Orthonormalized matrix (by column)
chol 
Cholesky decomposition (of a symmetric matrix)
sinv 
Inverse of a symmetric matrix
inner 
Inner product of a matrix by itself
outer 
Outer product of a matrix by itself
sval 
Singular values of a matrix as a vector
ortsvd 
Orthogonalization (left matrix of singular value decomposition)
svdright 
Singular value decomposition, right matrix
regr 
Coefficients of linear regression
sysequ 
Solve a system of linear equations
projoff 
Projection on orthogonal subspace
projonto 
Projection on a subspace spanned by columns of a matrix
kronh 
Rowwise Kronecker product
kronv 
Columnwise Kronecker product
fft 
Fast Fourier transform
ifft 
Inverse fast Fourier transform
conv 
Convolution
covmat 
Sample variance-covariance matrix of columns of a matrix
corrmat 
Sample correlation matrix of columns of a matrix
sort1 
Sorts a column vector in ascending order
sort 
Sort a matrix according to order of elements in a vector
diff 
First differences along columns of a matrix
diffln 
First differences of logarithms along columns of a matrix (logarithmic rates of growth)
csum 
Cumulative sum by columns
ranks 
Ranks of elements
cdfvec 
Sample cumulative distribution function
lorenz 
Lorenz curve
meanmat 
Means by column as a matrix of the same dimensionality
sumvec 
Sums by column as a column vector
meanvec 
Means by column as a column vector
ssvec 
Sums of squares by column as a column vector
cssvec 
Centered sums of squares by column as a column vector
sdvec 
Standard deviation by column as a column vector
 

    See also
   Functions

Matrix functions: submatrices

These functions can be used to extract parts of a matrix

el 
Element of a matrix
row 
Row of a matrix
col 
Column of a matrix
diagonal 
Diagonal of a matrix
submat 
Extracts submatrix of a matrix
extcols 
Extract columns
extrows 
Extract rows
delcols 
Delete columns
delrows 
Delete rows
testcols 
Existence of missing values in columns of a matrix
testrows 
Existence of missing values in rows of a matrix
clearcols 
Delete incomplete columns
clearrows 
Delete incomplete rows
 

    See also
   Functions

Matrix functions: various transformations

These functions change the order of elements in a matrix, etc.

replace 
Replace elements in a matrix
vec 
Vector from columns of a matrix
vecr 
Vector from rows of a matrix
slice 
Slice a vector
vech 
Vector from lower triangular part of a matrix
unvech 
Symmetric matrix from a vector
clonev 
Matrix reproduced vertically
cloneh 
Matrix reproduced horizontally
diag 
Diagonal matrix from vector
transp 
Transpose
fliph 
Flip matrix horizontally
flipv 
Flip matrix vertically
rotate90 
Rotate matrix 90 degrees clockwise
sort1 
Sorts a column vector in ascending order
sort 
Sort a matrix according to order of elements in a vector
ranks 
Ranks of elements
diff 
First differences along columns of a matrix
diffln 
First differences of logarithms along columns of a matrix (logarithmic rates of growth)
lag 
Lag of a matrix
clag 
Lag of a matrix (circular)
csum 
Cumulative sum by columns
 

    See also
   Functions

Creation of some specific matrices

List of functions for creating matrices of some special types

idenmat 
Identity matrix
onesvec 
Vector of ones
zerosvec 
Vector of zeros
vec123 
Vector 1,2,3,... (linear trend)
trend 
Linear trend
grid 
Uniform grid
n01vec 
Vector from N(0,1)
u01vec 
Vector from U[0,1]
onesmat 
Matrix of ones
zerosmat 
Matrix of zeros
n01mat 
Matrix from N(0,1)
u01mat 
Matrix from U[0,1]
dummy 
Dummy variable (unit vector)
void 
Create void matrix
create 
Create a matrix filled with a number
toepl 
Create Toeplitz matrix
genarma 
Generate ARMA process
genacov 
Generate stationary Gaussian process
genacovfft 
Generate stationary Gaussian process
genarfima 
Generate ARFIMA process
 

    See also
   Functions

Scalar functions of a matrix

These functions produce a number (scalar)

exists 
Indicator of existence of a matrix
rows 
Number of rows
cols 
Number of columns
det 
Determinant
lndet 
Logarithm of determinant
lnabsdet 
Logarithm of absolute value of determinant
sdet 
Determinant of a symmetric matrix
lnsdet 
Logarithm of determinant of a symmetric matrix
tr 
Trace
el 
Element of a matrix
mean 
Mean of elements of a matrix
sum 
Sum of elements of a matrix
ss 
Sum of squares of elements of a matrix
css 
Centered sum of squares of elements of a matrix
sd 
Standard deviation of elements of a matrix
var 
Variance of elements of a matrix
maxel 
Maximal element of a matrix
minel 
Minimal element of a matrix
med 
Median of elements of a matrix
quantile 
Sample quantile
gini 
Gini coefficient
cdf 
Sample cumulative distribution function
cov 
Sample covariance of two vectors
corr 
Sample correlation of two vectors
select 
Selection of element in ascending order
 

    See also
   Functions

Matrix functions: miscellaneous functions

acov 
Autocovariance function
pacf 
Transform autocovariance function to PACF
fdiff 
Fractional difference
hpfilter 
Hodrick-Prescott filter
armafilter 
ARMA filter
armaacov 
Autocovariance function of ARMA process
fiacov 
Autocovariance function of fractionally integrated process
roots 
Roots of a real polynomial
invroots 
Real polynomial coefficients from its roots
fliproots 
Flip all roots of real polynomial outside unit circle
bssample 
Sample for bootstrap
wbssample 
Weighted sample for bootstrap
daub4 
Fast wavelet transform, Daubechies-4
daub4inv 
Inverse fast wavelet transform, Daubechies-4
 

    See also
   Functions

' 
Matrix transpose operator
 
   
Details >>

-

- 
Difference or negation operation
 
    Details >>

 
Space or matrix concatenation operation
 
    Details >>

!

! 
Suffix of command name
 
    Details >>
" 
Quotes
 
   
Details >>

#

# 
Prefix of temporary matrix
 
    Details >>

$csum

$csum 
Dynamic function: Cumulative sum
 
    Section >>

$diff

$diff 
Dynamic function: Differences
 
    Section >>

$diffln

$diffln 
Dynamic function: Differences of logarithms
 
    Section >>

$i

$i 
Artificial variable: Observation number
 
    Details >>

$i2

$i2 
Artificial variable:
  Observation number squared
 
    Details >>

$i3

$i3 
Artificial variable:
Observation number cubed
 
    Details >>

$i4

$i4 
Artificial variable:
4-th power of observation number
 
    Details >>

$i5

$i5 
Artificial variable:
5-th power of observation number
 
    Details >>

$j

$j 
Artificial variable: Column number
 
    Details >>

$l

$l 
Dynamic function: Self lag
 
    Section >>

$lag

$lag 
Dynamic function: Lag
 
    Section >>

$m1

$m1 
Dummy variable: 1st month
 
    Details >>

$m10

$m10 
Dummy variable: 10th month
 
    Details >>

$m11

$m11 
Dummy variable: 11th month
 
    Details >>

$m12

$m12 
Dummy variable: 12th month
 
    Details >>

$m2

$m2 
Dummy variable: 2nd month
 
    Details >>

$m3

$m3 
Dummy variable: 3rd month
 
    Details >>

$m4

$m4 
Dummy variable: 4th month
 
    Details >>

$m5

$m5 
Dummy variable: 5th month
 
    Details >>

$m6

$m6 
Dummy variable: 6th month
 
    Details >>

$m7

$m7 
Dummy variable: 7th month
 
    Details >>

$m8

$m8 
Dummy variable: 8th month
 
    Details >>

$m9

$m9 
Dummy variable: 9th month
 
    Details >>

$q1

$q1 
Dummy variable: 1st quarter
 
    Details >>

$q2

$q2 
Dummy variable: 2nd quarter
 
    Details >>

$q3

$q3 
Dummy variable: 3rd quarter
 
    Details >>

$q4

$q4 
Dummy variable: 4th quarter
 
    Details >>

$t

$t 
Artificial variable: Trend
 
    Details >>

$t2

$t2 
Artificial variable: Trend squared
 
    Details >>

$t3

$t3 
Artificial variable: Trend cubed
 
    Details >>

$t4

$t4 
Artificial variable: 4-th power of trend
 
    Details >>

$t5

$t5 
Artificial variable: 5-th power of trend
 
    Details >>

$w1

$w1 
Dummy variable: 1st day of the week
 
    Details >>

$w2

$w2 
Dummy variable: 2nd day of the week
 
    Details >>

$w3

$w3 
Dummy variable: 3rd day of the week
 
    Details >>

$w4

$w4 
Dummy variable: 4th day of the week
 
    Details >>

$w5

$w5 
Dummy variable: 5th day of the week
 
    Details >>

$w6

$w6 
Dummy variable: 6th day of the week
 
    Details >>

$w7

$w7 
Dummy variable: 7th day of the week
 
    Details >>

%

% 
Prefix of parameter in formula
 
    Details >>

&

& 
Horizontal combination of matrices.
Also the first character of an option
 
    Details >>

&/

&/ 
Separates weights in weighted regression
 
    Details >>

(

( 
Left parenthesis in formula

(*

(* 
Start of comments
 
    Details >>

)

) 
Right parenthesis in formula

*

* 
Multiplication operation (direct multiplication for matrices)
 
    Details >>

*)

*) 
End of comments
 
    Details >>

*/

*/ 
End of comments
 
    Details >>

,

, 
Separates function arguments

.

. 
Matrix multiplication or decimal point
 
    Details >>

..

.. 
Interval sign
 
    Details >>

/

/ 
Division operation
 
    Details >>

/*

/* 
Start of comments
 
    Details >>

//

// 
End of line comments
 
    Details >>

:

: 
Separates left-hand side and right-hand side in regression.
Operator for linear regression coefficients in matrix formula
 
    Details >>

:=

:= 
Element-by-element assignment
 
    Details >>

;

; 
Finishes command in macro
 
    Details >>

?

? 
Sort operator
 
    Details >>

@

@ 
Prefix of scalar
 
    Details >>

@missing

@missing 
Scalar: Missing value
 
    Details >>

@na

@na 
Scalar: Missing value
 
    Details >>

@pi

@pi 
Scalar: Number Pi
 
    Details >>

@timer

@timer 
Scalar: Timer value
 
    Details >>

[

[ 
Left bracket for variable names and lags
 
    Details >>

\

\  
Prefix of model data
 
    Details >>

]

] 
Right bracket for variable names and lags
 
    Details >>

^

^ 
Raising to power operator
 
    Details >>

{

{ 
Right bracket for coverage of observations
 
    Details >>

|

| 
Vertical combination of matrices
 
    Details >>

}


} 
Right bracket for coverage of observations
 
    Details >>

~

~ 
Matrix inversion operator
 
    Details >>

+

+ 
Summation operation
 
    Details >>

<

< 
Relational operator.
1 if less, 0 otherwise
 
    Details >>

<=

<= 
Relational operator.
1 if less or equal, 0 otherwise
 
    Details >>

<>

<> 
Relational operator.
1 if not equal, 0 otherwise
 
    Details >>

=

= 
Equality relational operator.
1 if equal, 0 otherwise
 
    Details >>

==

== 
Matrix assignment
 
    Details >>

>

> 
Relational operator.
1 if greater, 0 otherwise
 
    Details >>

>=

>= 
Relational operator.
1 if greater or equal, 0 otherwise
 
    Details >>

2SLS!

2SLS! 
Simultaneous equations, two-stage least squares (generalized instrumental variables method)
2sls! <list of endogenous variables> : <list of exogenous variables>
 
    Details >>
 
    Section >>

3SLS!

3SLS! 
Simultaneous equations, three-stage least squares
3sls! <list of endogenous variables> : <list of exogenous variables>
 
    Details >>
 
    Section >>

ACF!

ACF! 
Autocorrelation function estimate
acf! <variable>
 
    Details >>

ACovFilter!

ACovFilter! 
Autocovariance filter
acovfilter! <input vector of autocovariances> <Y vector> <Sig2 vector> <E vector>
 
    Section >>

ADF!

ADF! 
Dickey-Fuller statistics
adf! (<type>,<difference>) <variable>
 
    Details >>
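
For illustration, assuming `1` is a valid <type> code and `gdp` is an existing variable, this tests the first difference:

```
adf! (1,1) gdp
```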

AR1!

AR1! 
Regression with AR(1) process in error term
ar1! <dependent variable> : <list of regressors>
 
    Section >>

ARFIMAFIGARCH!

ARFIMAFIGARCH! 
ARFIMA-FIGARCH
arfimafigarch! (<p1>,<q1>,<p2>,<q2>) <variable>
 
    Details >>
 
    Section >>

ARMA!

ARMA! 
Regression with ARMA error
arma! (<p>,<q>) <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>
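
A sketch of an ARMA(1,1) error specification; the variable names are hypothetical:

```
arma! (1,1) y : x1 x2
```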

ARMAMM!

ARMAMM! 
ARMA coefficients from autocovariances (method of moments for ARMA)
armamm! <input vector of autocovariances> <vector of AR parameters> <vector of MA parameters> <error variance> &ar <AR order> &ma <MA order>
 
    Section >>

Ask!

Ask! 
Ask whether to halt macros
ask! <string expression>
 
    Details >>

AutoLog!

AutoLog! 
Set auto log file
autolog! <file name>
 
    Section >>

Beep!

Beep! 
Beep sound in macros
 
    Details >>

Binning!

Binning! 
Binning
binning! <variable> &nbins <number of bins>
 
    Section >>

BoxJen!

BoxJen! 
Box-Jenkins model (ARIMA)
boxjen! (<p>,<q>) <variable> &d <d>
 
    Details >>
 
    Section >>

break!

break! 
Control command in macros.
Break cycle
 
    Details >>

Clear!

Clear! 
Delete temporary matrices
 
    Section >>

continue!

continue! 
Control command in macros.
Continue cycle (next loop)
 
    Details >>

Copy!

Copy! 
Copy matrix or variable
copy! <matrix or variable> <matrix or variable>
 
    Section >>

Corr!

Corr! 
Correlation matrix
corr! <list of variables>
 
    Details >>

dderiv

dderiv 
Evaluate formula directional derivative
 
    Details >>

dderiv2

dderiv2 
Evaluate formula second directional derivatives
 
    Details >>

Delete!

Delete! 
Delete matrices (variables)
delete! <List of matrices>
 
    Section >>

deriv

deriv 
Evaluate formula derivatives
 
    Details >>

Descript!

Descript! 
Descriptive statistics
descript! <variable>
  Write table to file:
descript! <variable> <file name>
"File name" can be "logfile"; the table is then written to the current log file.
 
    Details >>
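
For example, with a hypothetical variable `income` (the second call writes the table to the current log file):

```
descript! income
descript! income logfile
```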

Edit!

Edit! 
Open matrix in table editor
edit! <matrix>
 
    Section >>

else!

else! 
Control command in macros.
Optional part of "if" statement
 
    Details >>

endfor!

endfor! 
Control command in macros.
The end of cycle
 
    Details >>

endif!

endif! 
Control command in macros.
The final part of "if" statement
 
    Details >>

endloop!

endloop! 
Control command in macros.
The end of cycle
 
    Details >>

EqSys!

EqSys! 
Choose an equation from a system of regression equations
eqsys! <equation number (0 - whole system)>
 
    Section >>

EstTable!

EstTable! 
Call "Estimates and statistics" panel
esttable!
  Write estimation table to file:
esttable! <file name>
"File name" can be "logfile"; the table is then written to the current log file.
 
    Section >>

exit!

exit! 
Control command in macros. Halts the macros
 
    Details >>

External!

External! 
Run external file
external! <file name>
 
    Section >>

FIML!

FIML! 
Simultaneous equations, FIML method
fiml! <list of endogenous variables> : <list of exogenous variables>
 
    Details >>
 
    Section >>

for!

for! 
Control command in macros.
The beginning of cycle
for! <scalar> <initial value> <final value>
 
    Details >>
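
A minimal loop sketch; the counter `@i` is a hypothetical scalar, and the body command is arbitrary:

```
for! @i 1 10
  showtimer!
endfor!
```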

fplot!

fplot! 
Function plot
fplot! <list of functions> &lbound <left bound> &rbound <right bound> &n <number of intervals> &plotkind <kind of plot [-][*][|] >
or
fplot! "<variable name>" <list of functions> <options>
 
    Section >>

fu

fu 
Evaluate formula as function of its parameters
 
    Details >>

GARCH!

GARCH! 
Regression with GARCH error
garch!(<p>,<q>) <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

goto!

goto! 
Unconditional or conditional jump
goto! <label name> (unconditional jump)
goto! <label name> <scalar expression> (conditional jump)
Condition of jump is that scalar is positive
 
    Details >>
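
A sketch of a conditional backward jump; `@flag` is a hypothetical scalar, and the jump is taken while it is positive:

```
label! again
showtimer!
goto! again @flag
```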

Graph!

Graph! 
Graph trial
graph! <>

Hermite!

Hermite! 
Density estimation, Hermite series SNP
hermite! <variable>
 
    Details >>
 
    Section >>

Hessenberg!

Hessenberg! 
Hessenberg decomposition of a matrix
hessenberg! <input square matrix> <name of L matrix> <name of D matrix> <name of U matrix>
 
    Section >>

Hist!

Hist! 
Histogram
hist! <variable>
 
    Section >>

if!

if! 
Control command in macros.
The beginning of "if" statement
if! <condition>
Condition is a scalar expression (positive for true)
 
    Details >>
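
A sketch with a hypothetical scalar `@n` as the condition (positive means true):

```
if! @n
  beep!
else!
  exit!
endif!
```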

Import!

Import! 
Import matrix from file
import! <matrix name> <file name>
"File name" can be "clipboard"; the matrix is then imported from the clipboard.
 
    Details >>
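
Two hypothetical calls (the matrix and file names are illustrative):

```
import! mydata data.txt
import! mydata clipboard
```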

IV!

IV! 
Regression with instrumental variables
iv!  <dependent variable> : <list of regressors> : <list of instruments>
 
    Details >>
 
    Section >>

Kernel!

Kernel! 
Kernel nonparametric density estimation
  kernel! <variable>
 
    Details >>
 
    Section >>

KernelReg!

KernelReg! 
Kernel nonparametric regression
kernelreg! <dependent variable> : <explanatory variable>
 
    Details >>
 
    Section >>

label!

label! 
Label in macros:
label! <label name>
 
    Details >>

LDU!

LDU! 
LDU decomposition of a matrix
ldu! <input square matrix> <name of L matrix> <name of D matrix> <name of U matrix>
 
    Section >>

List!

List! 
Write matrix to file
list! <file name> <matrix>
"File name" can be "logfile"; the matrix is then written to the current log file.
 
    Section >>

LogFile!

LogFile! 
Set log file
logfile! <file name>
 
    Section >>

Logit!

Logit! 
Logit
logit! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

loop!

loop! 
Control command in macros.
The beginning of cycle
 
    Details >>

MA1!

MA1! 
Regression with MA(1) process in error term
ma1! <dependent variable> : <list of regressors>
 
    Section >>

MHetero!

MHetero! 
Regression with multiplicative heteroskedasticity
mhetero!  <dependent variable> : <list of regressors> : <list of multiplicative heteroskedasticity regressors>
 
    Details >>
 
    Section >>

Min!

Min! 
Function minimization
min! <formula>
 
    Details >>

Mixture!

Mixture! 
 
 
    Section >>

MLE!

MLE! 
Method of maximum likelihood
mle! <contribution to loglikelihood of a single observation>
 
    Details >>
 
    Section >>

NameVars!

NameVars! 
Rename all variables of a matrix
namevars! <matrix> <new names of variables>
 
    Section >>

NegBin!

NegBin! 
Negative binomial regression (NegBin-2)
negbin! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

nextif!

nextif! 
Control command in macros.
nextif! <condition>
Optional part of "if" statement
 
    Details >>

NLIV!

NLIV! 
Nonlinear instrumental variables method
nliv! <formula1> : <formula2> : <list of instruments>
 
    Details >>

NLS!

NLS! 
Nonlinear least squares
nls! <variable> : <formula>
 
    Details >>
 
    Section >>

Ordered!

Ordered! 
Ordered regression
ordered! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

Plot!

Plot! 
Plot. Observation number as X-axis variable
plot! <list of variables>
 
    Section >>

Plot3D!

Plot3D! 
3D plot based on matrix
plot3d! <matrix>
 
    Section >>

Poisson!

Poisson! 
Poisson regression
poisson! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

Polynom!

Polynom! 
Polynomial regression
polynom! <dependent variable> : <explanatory variable>
 
    Details >>
 
    Section >>

Print!

Print! 
Write string to file
print! <file name> <string expression>
  Clear file:
print! <file name> clear
"File name" can be "logfile"; the string expression is then written to the current log file.
 
    Section >>
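
Hypothetical calls, assuming `s_msg` is an existing string variable and `results.txt` is the target file:

```
print! results.txt s_msg
print! results.txt clear
print! logfile s_msg
```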

Probit!

Probit! 
Probit
probit! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

QReg!

QReg! 
Quantile regression
qreg! <dependent variable> : <list of regressors> &prob <quantile>
    Median regression (0.5-quantile)
  qreg! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

Rename!

Rename! 
Rename matrix or variable
rename! <matrix or variable> <new name>
 
    Section >>

Results!

Results! 
Call "Results of estimation" menu
 
    Section >>

s_

s_ 
Prefix of string
 
    Details >>

Scatter!

Scatter! 
Scatter plot
scatter! <X-axis variable> <list of Y-axis variables>
 
    Section >>

SEigen!

SEigen! 
Eigenvalues and eigenvectors of a symmetric matrix
seigen! <input matrix> <name of eigenvalues matrix> <name of eigenvectors matrix>
 
    Section >>

SetPref!

SetPref! 
Set preferences
setpref! <preferences path> <value>
 
    Section >>

SetSeed!

SetSeed! 
Set seed for the random number generator
setseed! <integer>
 
    Section >>

ShowPref!

ShowPref! 
Show preferences
showpref! <preferences path>
 
    Section >>

ShowTimer!

ShowTimer! 
Show timer value
 
    Details >>

Silent!

Silent! 
Macro makes pauses to show results
 
    Section >>

SimAnnPrefs!

SimAnnPrefs! 
Simulated annealing algorithm preferences
simannprefs! <options>
 
    Details >>

SML!

SML! 
Simulated maximum likelihood
sml! <variable> : <formula>

Spectrogram!

Spectrogram! 
Spectrogram
spectrogram! <variable>
 
    Details >>

Spectrum!

Spectrum! 
Spectral density estimate
spectrum! <variable>
 
    Details >>

Spline!

Spline! 
Cubic spline
spline! <dependent variable> : <explanatory variable>
 
    Details >>
 
    Section >>

StartTimer!

StartTimer! 
Start timer
 
    Details >>

SVD!

SVD! 
Singular value decomposition of a matrix: A=U.diag(S).V'
svd! <input matrix> <name of U matrix> <name of S matrix> <name of V matrix>
 
    Section >>

Text!

Text! 
Message in macros (without pause)
text! <string expression>
 
    Details >>

TimePlot!

TimePlot! 
Plot. Observation time as X-axis variable
timeplot! <list of Y-axis variables>
 
    Section >>

Tobit!

Tobit! 
Tobit
tobit! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

TruncReg!

TruncReg! 
Regression with truncated dependent variable
truncreg! <dependent variable> : <list of regressors>
 
    Details >>
 
    Section >>

VAR!

VAR! 
Vector autoregression
var! <list of endogenous variables> : <list of exogenous variables>
 
    Details >>
 
    Section >>

Verbose!

Verbose! 
Macro works without pauses
 
    Section >>

Wait!

Wait! 
Message in macros (with pause)
wait! <string expression>
 
    Details >>

XYPlot!

XYPlot! 
XY-plot
xyplot! <X-axis variable> <list of Y-axis variables>
 
    Section >>

Matrix decomposition commands

List of matrix decomposition commands.
 
ldu! 
LDU decomposition of a matrix
svd! 
Singular value decomposition of a matrix: A=U.diag(S).V'
seigen! 
Eigenvalues and eigenvectors of a symmetric matrix

    See also 
   Commands

Other commands

clear! 
Delete temporary matrices
delete! 
Delete matrices (variables)
rename! 
Rename matrix or variable
copy! 
Copy matrix or variable
namevars! 
Rename all variables of a matrix
edit! 
Open matrix in table editor
results! 
Call "Results of estimation" menu
esttable! 
Call "Estimates and statistics" panel
print! 
Write string to file
list! 
Write matrix to file
external! 
Run external file
logfile! 
Set log file
autolog! 
Set auto log file
verbose! 
Macro works without pauses
silent! 
Macro makes pauses to show results
showpref! 
Show preferences
setpref! 
Set preferences

    See also 
   Commands 
   Macros (blocks of commands) 
   Command window

Other statistical and mathematical commands

acovfilter! 
Autocovariance filter
armamm! 
ARMA coefficients from autocovariances (method of moments for ARMA)
fplot! 
Function plot
plot3d! 
3D plot based on matrix
binning! 
Binning

    See also 
   Commands 
   Statistical procedures