WELCOME TO MY BLOG !!!

Tuesday, October 8, 2013

PRACTICE: PRESENT PERFECT

1. We have constructed this entire model based on data types.
2. Since the 1970s, Kay's Smalltalk work has influenced the Lisp community to incorporate object-based techniques.
3. Object-oriented features have been added to many existing languages during that time, including Ada, BASIC, Fortran, Pascal, and others.
4. More recently, a number of languages have emerged that are primarily object-oriented yet compatible with procedural methodology.
5. Languages that are historically procedural languages have been extended with some OO features.
6. A tag contains the index of the datum in main memory that has been stored in the cache.

Thursday, September 19, 2013

PRACTICE: SIMPLE PRESENT - COMPUTER LANGUAGE SYNTAX

COMPUTER LANGUAGE SYNTAX
Every spoken language has a general set of rules for how words and sentences should be structured. These rules are collectively known as the language syntax. In computer programming, syntax serves the same purpose, defining how declarations, functions, commands, and other statements should be arranged.
Many computer programming languages share similar syntax rules, while others have a unique syntax design. For example, C and Java use a similar syntax, while Perl has many characteristics that are not seen in either the C or Java languages.
A program's source code must have correct syntax in order to compile correctly and be made into a program. In fact, it must have perfect syntax, or the program will fail to compile and produce a "syntax error." A syntax error can be as simple as a missing parenthesis or a forgotten semicolon at the end of a statement. Even these small errors will keep the source code from compiling.
Fortunately, most integrated development environments (IDEs) include a parser, which detects syntax errors within the source code. Modern parsers can even highlight syntax errors before a program is compiled, making it easy for the programmer to locate and fix them.
NOTE: Syntax errors are also called compile-time errors, since they can prevent a program from compiling. Errors that occur in a program after it has been compiled are called runtime errors, since they occur when the program is running.
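
To see the distinction in practice, here is a minimal Python sketch (the function is invented purely for illustration): a missing parenthesis is a compile-time (syntax) error that the parser catches before anything runs, while a division by zero is a runtime error that only appears during execution.

# print("Hello, world!"    <- missing ")": a syntax error, caught before the program runs

def divide(a, b):
    return a / b           # syntactically valid, so this compiles fine

print(divide(10, 2))       # 5.0
print(divide(10, 0))       # ZeroDivisionError: a runtime error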

SOURCE CODE
Every computer program is written in a programming language, such as Java, C/C++, or Perl. These programs may contain anywhere from a few lines to millions of lines of text, called source code.
Source code, often referred to as simply the "source" of a program, contains variable declarations, instructions, functions, loops, and other statements that tell the program how to function. Programmers may also add comments to their source code that explain sections of the code. These comments help other programmers gain at least some idea of what the source code does without requiring hours to decipher it. Comments can be helpful to the original programmer as well if many months or years have gone by since the code was written.
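
For example, a short commented function in Python might look like this (a made-up snippet, not taken from any particular program):

# Convert a temperature from Celsius to Fahrenheit.
# The comment tells a reader what the function does without
# forcing them to decipher the formula below.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32  # standard conversion formula

print(celsius_to_fahrenheit(100))  # 212.0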
Short programs called scripts can be run directly from the source code using a scripting engine, such as a VBScript or PHP engine. Most large programs, however, require that the source code first be compiled, which translates the code into a language the computer can understand. When changes are made to the source code of these programs, they must be recompiled in order for the changes to take effect in the program.
Small programs may use only one source code file, while larger programs may reference hundreds or even thousands of files. Having multiple source files helps organize the program into different sections. Having one file that contains every variable and function can make it difficult to locate specific sections of the code. Regardless of how many source code files are used to create a program, you will most likely not see any of the original files on your computer. This is because they are all combined into one program file, or application, when they are compiled.

Thursday, September 5, 2013

READING COMPREHENSION: THE PARADIGMS OF PROGRAMMING

The Paradigms of Programming

                  A familiar example of a paradigm of programming is the technique of structured programming, which appears to be the dominant paradigm in most current treatments of programming methodology. Structured programming, as formulated by Dijkstra [6], Wirth [27, 29], and Parnas [21], among others, consists of two phases.
In the first phase, that of top-down design, or stepwise refinement, the problem is decomposed into a very small number of simpler subproblems. In programming the solution of simultaneous linear equations, say, the first level of decomposition would be into a stage of triangularizing the equations and a following stage of back-substitution in the triangularized system. This gradual decomposition is continued until the subproblems that arise are simple enough to cope with directly. In the simultaneous equation example, the back substitution process would be further decomposed as a backwards iteration of a process which finds and stores the value of the ith variable from the ith equation. Yet further decomposition would yield a fully detailed algorithm.
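
To make the final level of that decomposition concrete, here is a minimal Python sketch of the back-substitution stage (the function name, and the assumption that the system is already upper triangular, are illustrative additions rather than anything from the paper):

def back_substitute(u, b):
    """Solve u x = b, where u is an upper-triangular matrix (list of lists)."""
    n = len(b)
    x = [0.0] * n
    # Work backwards: the ith equation determines the ith variable
    # once all later variables are already known.
    for i in range(n - 1, -1, -1):
        s = sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / u[i][i]
    return x

# Example: x + 2y = 5 and 3y = 6 give y = 2, x = 1.
print(back_substitute([[1.0, 2.0], [0.0, 3.0]], [5.0, 6.0]))  # [1.0, 2.0]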
                  The second phase of the structured programming paradigm entails working upward from the concrete objects and functions of the underlying machine to the more abstract objects and functions used throughout the modules produced by the top-down design. In the linear equation example, if the coefficients of the equations are rational functions of one variable, we might first design a multiple-precision arithmetic representation and procedures, then, building upon them, a polynomial representation with its own arithmetic procedures, etc. This approach is referred to as the method of levels of abstraction, or of information hiding.
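
A tiny Python analogy of this layering (an invented illustration, not taken from the paper): exact rational arithmetic is built first, and a polynomial representation with its own operations is then built on top of it.

from fractions import Fraction

class Poly:
    """A polynomial with rational coefficients, layered on top of Fraction."""
    def __init__(self, coeffs):
        self.coeffs = [Fraction(c) for c in coeffs]  # coeffs[i] multiplies x**i

    def __call__(self, x):
        # Evaluation uses only the lower-level Fraction arithmetic.
        x = Fraction(x)
        return sum(c * x ** i for i, c in enumerate(self.coeffs))

p = Poly([1, 0, 1])         # represents 1 + x^2
print(p(Fraction(1, 2)))    # 5/4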
                Other high level paradigms of a more specialized type, such as branch-and-bound [17, 20] or divide-and-conquer [1, 11] techniques, continue to be essential. Yet the paradigm of structured programming does serve to extend one's powers of design, allowing the construction of programs that are too complicated to be designed efficiently and reliably without methodological support.
                        Source: Floyd, R. W. (1979). "The paradigms of programming". Communications of the ACM 22 (8): 455.

Monday, August 12, 2013

SIMPLE PRESENT TENSE: ASSOCIATIVE ARRAY

Associative array
An associative array (also associative container, map, mapping, dictionary, finite map, and in query-processing an index or index file) is an abstract data type composed of a collection of unique keys and a collection of values, where each key is associated with one value (or set of values). The operation of finding the value associated with a key is called a lookup or indexing, and this is the most important operation supported by an associative array. The relationship between a key and its value is sometimes called a mapping or binding. For example, if the value associated with the key "bob" is 7, we say that our array maps "bob" to 7. Associative arrays are very closely related to the mathematical concept of a function with a finite domain. As a consequence, a common and important use of associative arrays is in memoization.
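
In Python, the built-in dict is such an associative array. The sketch below shows the "bob" example from above and a small memoization of Fibonacci numbers (the cache and function names are made up for illustration):

ages = {"bob": 7}        # the array maps "bob" to 7
print(ages["bob"])       # lookup (indexing) -> 7

cache = {}               # memoization: key = argument, value = computed result
def fib(n):
    if n in cache:
        return cache[n]  # reuse the stored binding instead of recomputing
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    cache[n] = result    # bind the key n to its value
    return result

print(fib(30))           # 832040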
From the perspective of a computer programmer, an associative array can be viewed as a generalization of an array. While a regular array maps an integer key (index) to a value of arbitrary data type, an associative array's keys can also be arbitrarily typed. In some programming languages, such as Python, the keys of an associative array do not even need to be of the same type.
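
For instance, a single Python dictionary can mix key types freely, as long as each key is hashable (a throwaway example):

mixed = {42: "an int key", "name": "a string key", (1, 2): "a tuple key"}
print(mixed[(1, 2)])  # -> "a tuple key"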
Content-addressable memory (CAM) systems use a special type of computer memory to improve the performance of lookups in associative arrays and are used in specialized applications. Several supercomputers from the 1970s implemented CAM directly in hardware, and were known as associative computers.
Data structures for representing associative arrays
Associative arrays are usually used when lookup is the most frequent operation. For this reason, implementations are usually designed to allow speedy lookup, at the expense of slower insertion and a larger storage footprint than other data structures (such as association lists).
Efficient representations
There are two main efficient data structures used to represent associative arrays, the hash table and the self-balancing binary search tree (such as a red-black tree or an AVL tree). Skip lists are also an alternative, though relatively new and not as widely used. B-trees (and variants) can also be used, and are commonly used when the associative array is too large to reside entirely in memory, for instance in a simple database.
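
As a rough illustration of the first of these representations, here is a deliberately simplified hash table with separate chaining in Python (fixed bucket count and no resizing; real implementations are far more careful):

class SimpleHashTable:
    """Toy hash table using separate chaining, for illustration only."""
    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # The hash function decides which bucket a key belongs to.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing binding
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = SimpleHashTable()
table.put("bob", 7)
print(table.get("bob"))  # 7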

Wednesday, July 31, 2013

PASSIVE VERBS - SIMPLE PRESENT -- TRANSLATE THE FOLLOWING TEXT:

Data structure
In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers.
Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address — a bit string that can be itself stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations; while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking).
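
Python hides raw memory addresses, but the two principles can still be imitated: an array computes where an element lives from its index, while a linked structure stores a reference to the next node. The sketch below (with an invented Node class) is only an analogy:

# Array-like: the position of an element is computed from its index
# (the address arithmetic happens inside the list implementation).
arr = [10, 20, 30]
print(arr[2])              # 30

# Linked: each node stores a reference to the next item.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

head = Node(10, Node(20, Node(30)))
node = head
while node is not None:
    print(node.value)      # 10, then 20, then 30
    node = node.next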

Monday, July 8, 2013

PASSIVE VOICE-SIMPLE PRESENT

Data structure

In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers.
Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address — a bit string that can be itself stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations; while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking).