Wednesday, October 30, 2013

Algebra









Figure: three-dimensional right conoid surface plot, described by the elementary trigonometric equations x = v cos(u), y = v sin(u), z = 2 sin(u).
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis.
For historical reasons, the word "algebra" has several related meanings in mathematics, as a single word or with qualifiers.
The adjective "algebraic" usually means relation to algebra, as in "algebraic structure". For historical reasons, it may also mean relation with the roots of polynomial equations, like in algebraic number, algebraic extension or algebraic expression.

Algebra as a branch of mathematics

Algebra can essentially be considered as doing computations similar to that of arithmetic with non-numerical mathematical objects.[1] Initially, these objects represented either numbers that were not yet known (unknowns) or unspecified numbers (indeterminate or parameter), allowing one to state and prove properties that are true no matter which numbers are involved. For example, in the quadratic equation
ax^2+bx+c=0,
a, b and c are indeterminates and x is the unknown. Solving this equation amounts to computing with the symbols to express the unknown in terms of the indeterminates. Substituting any numbers for the indeterminates then gives the solution of a particular equation after a simple arithmetic computation.
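For instance, when a ≠ 0, the unknown can be written explicitly in terms of the indeterminates by the familiar quadratic formula:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}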
As it developed, algebra was extended to other non-numerical objects, like vectors, matrices or polynomials. Then, the structural properties of these non-numerical objects were abstracted to define algebraic structures like groups, rings, fields and algebras.
Before the 16th century, mathematics was divided into only two subfields, arithmetic and geometry. Even though some methods developed much earlier may nowadays be considered algebra, the emergence of algebra and, soon thereafter, of infinitesimal calculus as subfields of mathematics dates only from the 16th or 17th century. From the second half of the 19th century on, many new fields of mathematics appeared, some of them included in algebra, either totally or partially.
It follows that algebra, instead of being a true branch of mathematics, appears nowadays to be a collection of branches sharing common methods. This is clearly seen in the Mathematics Subject Classification,[2] where none of the first-level areas (two-digit entries) is called algebra. In fact, algebra is, roughly speaking, the union of sections 08-General algebraic systems, 12-Field theory and polynomials, 13-Commutative algebra, 15-Linear and multilinear algebra; matrix theory, 16-Associative rings and algebras, 17-Nonassociative rings and algebras, 18-Category theory; homological algebra, 19-K-theory and 20-Group theory. Some other first-level areas may be considered to belong partially to algebra, such as 11-Number theory (mainly for algebraic number theory) and 14-Algebraic geometry.
Elementary algebra is the part of algebra that is usually taught in elementary courses of mathematics.
Abstract algebra is a name usually given to the study of the algebraic structures themselves.

History

The start of algebra as an area of mathematics may be dated to the end of the 16th century, with François Viète's work. Nevertheless, some earlier works may be considered as algebra and constitute the prehistory of algebra.

Prehistory of algebra

The roots of algebra can be traced to the ancient Babylonians,[3] who developed an advanced arithmetical system with which they were able to do calculations in an algorithmic fashion. The Babylonians developed formulas to calculate solutions for problems typically solved today by using linear equations, quadratic equations, and indeterminate linear equations. By contrast, most Egyptians of this era, as well as Greek and Chinese mathematics in the 1st millennium BC, usually solved such equations by geometric methods, such as those described in the Rhind Mathematical Papyrus, Euclid's Elements, and The Nine Chapters on the Mathematical Art. The geometric work of the Greeks, typified in the Elements, provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations, although this would not be realized until mathematics developed in medieval Islam.[4]
By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them.[1] Diophantus (3rd century AD), sometimes called "the father of algebra", was an Alexandrian Greek mathematician and the author of a series of books called Arithmetica. These texts deal with solving algebraic equations.[5]
The word algebra comes from the Arabic language (الجبر al-jabr "restoration"), and many of its methods come from Arabic/Islamic mathematics. The earlier traditions discussed above had a direct influence on Muhammad ibn Mūsā al-Khwārizmī (c. 780–850). He later wrote The Compendious Book on Calculation by Completion and Balancing, which established algebra as a mathematical discipline that is independent of geometry and arithmetic.[6]
The Hellenistic mathematicians Hero of Alexandria and Diophantus,[7] as well as Indian mathematicians such as Brahmagupta, continued the traditions of Egypt and Babylon, though Diophantus' Arithmetica and Brahmagupta's Brahmasphutasiddhanta are on a higher level.[8] For example, the first complete arithmetic solution (including zero and negative solutions) to quadratic equations was described by Brahmagupta in his book Brahmasphutasiddhanta. Later, Arabic and Muslim mathematicians developed algebraic methods to a much higher degree of sophistication. Although Diophantus and the Babylonians used mostly special ad hoc methods to solve equations, Al-Khwarizmi's contribution was fundamental. He solved linear and quadratic equations without algebraic symbolism, negative numbers or zero, so he had to distinguish several types of equations.[9]
The Greek mathematician Diophantus has traditionally been known as the "father of algebra" but in more recent times there is much debate over whether al-Khwarizmi, who founded the discipline of al-jabr, deserves that title instead.[10] Those who support Diophantus point to the fact that the algebra found in Al-Jabr is slightly more elementary than the algebra found in Arithmetica and that Arithmetica is syncopated while Al-Jabr is fully rhetorical.[11] Those who support Al-Khwarizmi point to the fact that he introduced the methods of "reduction" and "balancing" (the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation) which the term al-jabr originally referred to,[12] and that he gave an exhaustive explanation of solving quadratic equations,[13] supported by geometric proofs, while treating algebra as an independent discipline in its own right.[14] His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study". He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems".[15]
The Persian mathematician Omar Khayyam is credited with identifying the foundations of algebraic geometry and with finding the general geometric solution of the cubic equation. Another Persian mathematician, Sharaf al-Dīn al-Tūsī, found algebraic and numerical solutions to various cases of cubic equations.[16] He also developed the concept of a function.[17] The Indian mathematicians Mahavira and Bhaskara II, the Persian mathematician Al-Karaji,[18] and the Chinese mathematician Zhu Shijie solved various cases of cubic, quartic, quintic and higher-order polynomial equations using numerical methods. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra. As the Islamic world declined and the European world ascended, it was in Europe that algebra was further developed.

History of algebra


In 1545, the Italian mathematician Girolamo Cardano published Ars Magna (The Great Art), a 40-chapter masterpiece in which he gave, for the first time, a method for solving the general cubic and quartic equations.
François Viète's work at the close of the 16th century marks the start of the classical discipline of algebra. In 1637, René Descartes published La Géométrie, inventing analytic geometry and introducing modern algebraic notation. Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Kowa Seki in the 17th century, followed independently by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century. Permutations were studied by Joseph-Louis Lagrange in his 1770 paper Réflexions sur la résolution algébrique des équations devoted to solutions of algebraic equations, in which he introduced Lagrange resolvents. Paolo Ruffini was the first person to develop the theory of permutation groups, and like his predecessors, also in the context of solving algebraic equations.
Abstract algebra was developed in the 19th century, deriving from the interest in solving equations, initially focusing on what is now called Galois theory, and on constructibility issues.[19] The "modern algebra" has deep nineteenth-century roots in the work, for example, of Richard Dedekind and Leopold Kronecker and profound interconnections with other branches of mathematics such as algebraic number theory and algebraic geometry.[20] George Peacock was the founder of axiomatic thinking in arithmetic and algebra. Augustus De Morgan discovered relation algebra in his Syllabus of a Proposed System of Logic. Josiah Willard Gibbs developed an algebra of vectors in three-dimensional space, and Arthur Cayley developed an algebra of matrices (this is a noncommutative algebra).[21]

Topics containing the word "algebra"

These include whole areas of mathematics, such as elementary algebra and abstract algebra discussed below, as well as many individual mathematical structures that are called algebras.

Elementary algebra


Figure: algebraic expression notation: 1 - power (exponent), 2 - coefficient, 3 - term, 4 - operator, 5 - constant term; x, y, c - variables/constants.
Elementary algebra is the most basic form of algebra. It is taught to students who are presumed to have no knowledge of mathematics beyond the basic principles of arithmetic. In arithmetic, only numbers and their arithmetical operations (such as +, −, ×, ÷) occur. In algebra, numbers are often denoted by symbols (such as a, n, x, y or z). This is useful because:
  • It allows the general formulation of arithmetical laws (such as a + b = b + a for all a and b), and thus is the first step to a systematic exploration of the properties of the real number system.
  • It allows reference to "unknown" numbers, the formulation of equations and the study of how to solve them. (For instance, "Find a number x such that 3x + 1 = 10" or, going a bit further, "Find a number x such that ax + b = c". This step leads to the conclusion that it is not the nature of the specific numbers that allows us to solve it, but that of the operations involved; see the sketch after this list.)
  • It allows the formulation of functional relationships. (For instance, "If you sell x tickets, then your profit will be 3x − 10 dollars, or f(x) = 3x − 10, where f is the function, and x is the number to which the function is applied".)
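The second point above can be made concrete with a small Python sketch (my own illustration, with a hypothetical helper solve_linear that is not part of the article): the general problem "find x such that ax + b = c" is solved once, symbolically, and the same solution is then reused for any particular numbers.

def solve_linear(a, b, c):
    """Solve a*x + b = c for x, assuming a != 0."""
    return (c - b) / a

# The same one-line solution handles every particular instance:
print(solve_linear(3, 1, 10))   # 3x + 1 = 10  ->  3.0
print(solve_linear(2, -5, 9))   # 2x - 5 = 9   ->  7.0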

Polynomials


Figure: the graph of a polynomial function of degree 3.
A polynomial is an expression that is the sum of a finite number of non-zero terms, each term consisting of the product of a constant and a finite number of variables raised to whole number powers. For example, x^2 + 2x − 3 is a polynomial in the single variable x. A polynomial expression is an expression that may be rewritten as a polynomial, by using commutativity, associativity and distributivity of addition and multiplication. For example, (x − 1)(x + 3) is a polynomial expression that, properly speaking, is not a polynomial. A polynomial function is a function that is defined by a polynomial, or, equivalently, by a polynomial expression. The two preceding examples define the same polynomial function.
Two important and related problems in algebra are the factorization of polynomials, that is, expressing a given polynomial as a product of other polynomials that cannot be factored any further, and the computation of polynomial greatest common divisors. The example polynomial above can be factored as (x − 1)(x + 3). A related class of problems is finding algebraic expressions for the roots of a polynomial in a single variable.
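As a minimal sketch (Python, my own addition rather than anything from the article, with a hypothetical helper poly_mul), the rewriting of the polynomial expression (x − 1)(x + 3) into the polynomial x^2 + 2x − 3 is just the distributive law applied to coefficient lists:

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (x - 1) is [-1, 1] and (x + 3) is [3, 1]; their product is x^2 + 2x - 3.
print(poly_mul([-1, 1], [3, 1]))   # [-3, 2, 1]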

Teaching algebra

It has been suggested that elementary algebra should be taught to students as young as eleven years old,[22] though in recent years it is more common for public lessons to begin at the eighth-grade level (around age 13) in the United States.[23]
Since 1997, Virginia Tech and some other universities have begun using a personalized model of teaching algebra that combines instant feedback from specialized computer software with one-on-one and small group tutoring, which has reduced costs and increased student achievement.[24]

Abstract algebra

Abstract algebra extends the familiar concepts found in elementary algebra and arithmetic of numbers to more general concepts. Here are listed fundamental concepts in abstract algebra.
Sets: Rather than just considering the different types of numbers, abstract algebra deals with the more general concept of a set: a collection of objects (called elements) selected according to some property specific to the set. All collections of the familiar types of numbers are sets. Other examples of sets include the set of all two-by-two matrices, the set of all second-degree polynomials (ax^2 + bx + c), the set of all two-dimensional vectors in the plane, and the various finite groups such as the cyclic groups, which are the groups of integers modulo n. Set theory is a branch of logic and not technically a branch of algebra.
Binary operations: The notion of addition (+) is abstracted to give a binary operation, ∗ say. The notion of binary operation is meaningless without the set on which the operation is defined. For two elements a and b in a set S, a ∗ b is another element in the set; this condition is called closure. Addition (+), subtraction (−), multiplication (×), and division (÷) can be binary operations when defined on different sets, as are addition and multiplication of matrices, vectors, and polynomials.
Identity elements: The numbers zero and one are abstracted to give the notion of an identity element for an operation. Zero is the identity element for addition and one is the identity element for multiplication. For a general binary operator ∗ the identity element e must satisfy a ∗ e = a and e ∗ a = a. This holds for addition, as a + 0 = a and 0 + a = a, and for multiplication, as a × 1 = a and 1 × a = a. Not all sets and operator combinations have an identity element; for example, the set of positive natural numbers (1, 2, 3, ...) has no identity element for addition.
Inverse elements: The negative numbers give rise to the concept of inverse elements. For addition, the inverse of a is written −a, and for multiplication the inverse is written a−1. A general two-sided inverse element a−1 satisfies the property that a ∗ a−1 = e and a−1 ∗ a = e, where e is the identity element.
Associativity: Addition of integers has a property called associativity. That is, the grouping of the numbers to be added does not affect the sum. For example: (2 + 3) + 4 = 2 + (3 + 4). In general, this becomes (a ∗ b) ∗ c = a ∗ (b ∗ c). This property is shared by most binary operations, but not subtraction or division or octonion multiplication.
Commutativity: Addition and multiplication of real numbers are both commutative. That is, the order of the numbers does not affect the result. For example: 2 + 3 = 3 + 2. In general, this becomes a ∗ b = b ∗ a. This property does not hold for all binary operations. For example, matrix multiplication and quaternion multiplication are both non-commutative.
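As a small illustration of a non-commutative binary operation (my own Python sketch, not from the article, with a hypothetical helper mat_mul), multiplying two 2×2 matrices in the two possible orders gives different results:

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]]  -- so A*B != B*A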

Groups

Combining the above concepts gives one of the most important structures in mathematics: a group. A group is a combination of a set S and a single binary operation ∗, defined in any way you choose, but with the following properties:
  • An identity element e exists, such that for every member a of S, e ∗ a and a ∗ e are both identical to a.
  • Every element has an inverse: for every member a of S, there exists a member a−1 such that a ∗ a−1 and a−1 ∗ a are both identical to the identity element.
  • The operation is associative: if a, b and c are members of S, then (a ∗ b) ∗ c is identical to a ∗ (b ∗ c).


If a group is also commutative—that is, for any two members a and b of S, a ∗ b is identical to b ∗ a—then the group is said to be abelian.
For example, the set of integers under the operation of addition is a group. In this group, the identity element is 0 and the inverse of any element a is its negation, −a. The associativity requirement is met, because for any integers a, b and c, (a + b) + c = a + (b + c).
The nonzero rational numbers form a group under multiplication. Here, the identity element is 1, since 1 × a = a × 1 = a for any rational number a. The inverse of a is 1/a, since a × 1/a = 1.
The integers under the multiplication operation, however, do not form a group. This is because, in general, the multiplicative inverse of an integer is not an integer. For example, 4 is an integer, but its multiplicative inverse is ¼, which is not an integer.
The theory of groups is studied in group theory. A major result in this theory is the classification of finite simple groups, mostly published between about 1955 and 1983, which separates the finite simple groups into roughly 30 basic types.
Semigroups, quasigroups, and monoids are structures similar to groups, but more general. They comprise a set and a closed binary operation, but do not necessarily satisfy the other conditions. A semigroup has an associative binary operation, but might not have an identity element. A monoid is a semigroup which does have an identity but might not have an inverse for every element. A quasigroup satisfies a requirement that any element can be turned into any other by either a unique left-multiplication or right-multiplication; however the binary operation might not be associative.
All groups are monoids, and all monoids are semigroups.
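The examples above can be checked mechanically. The following rough sketch (Python, my own illustration with a hypothetical helper is_group, not part of the article) tests the group axioms by brute force for small finite sets, including the integers modulo n under addition mentioned earlier:

def is_group(elements, op):
    """Check closure, identity, inverses and associativity by brute force."""
    elements = list(elements)
    # closure: a * b stays inside the set
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # identity: some e with e * a == a == a * e for every a
    identity = next((e for e in elements
                     if all(op(e, a) == a and op(a, e) == a for a in elements)), None)
    if identity is None:
        return False
    # inverses: every a has some b with a * b == b * a == identity
    if any(not any(op(a, b) == identity and op(b, a) == identity for b in elements)
           for a in elements):
        return False
    # associativity: (a * b) * c == a * (b * c)
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in elements for b in elements for c in elements)

print(is_group(range(5), lambda a, b: (a + b) % 5))     # True: integers mod 5 under addition
print(is_group(range(1, 5), lambda a, b: (a * b) % 5))  # True: nonzero residues mod 5 under multiplication
print(is_group(range(1, 5), lambda a, b: a * b))        # False: ordinary multiplication is not closed here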

Rings and fields

Main articles: ring (mathematics) and field (mathematics)

Groups just have one binary operation. To fully explain the behaviour of the different types of numbers, structures with two operators need to be studied. The most important of these are rings and fields.
A ring has two binary operations (+) and (×), with × distributive over +. Under the first operator (+) it forms an abelian group. Under the second operator (×) it is associative, but it does not need to have an identity or inverses, so division is not required. The additive (+) identity element is written as 0 and the additive inverse of a is written as −a.
Distributivity generalises the distributive law for numbers and specifies the order in which the operators should be applied (called the precedence). For the integers, (a + b) × c = a × c + b × c and c × (a + b) = c × a + c × b, and × is said to be distributive over +.
The integers are an example of a ring. The integers have additional properties which make them an integral domain.
A field is a ring with the additional property that all the elements excluding 0 form an abelian group under ×. The multiplicative (×) identity is written as 1 and the multiplicative inverse of a is written as a−1.
The rational numbers, the real numbers and the complex numbers are all examples of fields.
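A small sketch (Python, my own addition rather than part of the article) of the difference just described: in the ring of integers most elements have no multiplicative inverse, while in the field of rationals every nonzero element does.

from fractions import Fraction

# In the integers, 4 has no integer multiplicative inverse: no integer x gives 4 * x == 1.
print(any(4 * x == 1 for x in range(-100, 101)))   # False

# In the rationals, every nonzero element has an inverse: 4 * (1/4) == 1.
a = Fraction(4)
print(a * Fraction(1, 4) == 1)                     # True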

Thursday, August 22, 2013

Probability and statistics symbols



Statistical Symbols
Probability and statistics symbols table and definitions.

Probability and statistics symbols table





Symbol    Symbol Name    Meaning / definition    Example
P(A)    probability function    probability of event A    P(A) = 0.5
P(A ∩ B)    probability of events intersection    probability that events A and B both occur    P(A ∩ B) = 0.5
P(A ∪ B)    probability of events union    probability that event A or event B occurs    P(A ∪ B) = 0.5
P(A | B)    conditional probability function    probability of event A given that event B occurred    P(A | B) = 0.3
f(x)    probability density function (pdf)    P(a ≤ X ≤ b) = ∫ f(x) dx
F(x)    cumulative distribution function (cdf)    F(x) = P(X ≤ x)
μ    population mean    mean of population values    μ = 10
E(X)    expected value    expected value of random variable X    E(X) = 10
E(X | Y)    conditional expectation    expected value of random variable X given Y    E(X | Y = 2) = 5
var(X)    variance    variance of random variable X    var(X) = 4
σ²    variance    variance of population values    σ² = 4
std(X)    standard deviation    standard deviation of random variable X    std(X) = 2
σX    standard deviation    standard deviation of random variable X    σX = 2
x̃    median    middle value of random variable X
cov(X,Y)    covariance    covariance of random variables X and Y    cov(X,Y) = 4
corr(X,Y)    correlation    correlation of random variables X and Y    corr(X,Y) = 0.6
ρX,Y    correlation    correlation of random variables X and Y    ρX,Y = 0.6
∑    summation    sum of all values in the range of a series
∑∑    double summation    double summation
Mo    mode    value that occurs most frequently in the population
MR    mid-range    MR = (xmax + xmin) / 2
Md    sample median    half the population is below this value
Q1    lower / first quartile    25% of the population is below this value
Q2    median / second quartile    50% of the population is below this value = median of samples
Q3    upper / third quartile    75% of the population is below this value
x̄    sample mean    average / arithmetic mean    x̄ = (2 + 5 + 9) / 3 = 5.333
s²    sample variance    estimator of the population variance from a sample    s² = 4
s    sample standard deviation    estimator of the population standard deviation from a sample    s = 2
zx    standard score    zx = (x − x̄) / sx
X ~    distribution of X    distribution of random variable X    X ~ N(0,3)
N(μ,σ²)    normal (Gaussian) distribution    X ~ N(0,3)
U(a,b)    uniform distribution    equal probability in range a,b    X ~ U(0,3)
exp(λ)    exponential distribution    f(x) = λe^(−λx), x ≥ 0
gamma(c, λ)    gamma distribution    f(x) = λ^c x^(c−1) e^(−λx) / Γ(c), x ≥ 0
χ²(k)    chi-square distribution    f(x) = x^(k/2−1) e^(−x/2) / (2^(k/2) Γ(k/2))
F(k1, k2)    F distribution
Bin(n,p)    binomial distribution    f(k) = nCk p^k (1−p)^(n−k)
Poisson(λ)    Poisson distribution    f(k) = λ^k e^(−λ) / k!
Geom(p)    geometric distribution    f(k) = p(1−p)^k
HG(N,K,n)    hyper-geometric distribution
Bern(p)    Bernoulli distribution
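As a quick sketch (Python, added here for illustration rather than taken from the table's source), several of the sample statistics above can be computed with the standard library:

import statistics

data = [2, 5, 9]
print(statistics.mean(data))       # x-bar, the sample mean: 5.333...
print(statistics.median(data))     # Md, the sample median
print(statistics.pvariance(data))  # variance treating the data as the whole population
print(statistics.variance(data))   # s^2, the sample variance estimator
print(statistics.stdev(data))      # s, the sample standard deviation estimator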



Combinatorics Symbols

Symbol Symbol Name Meaning / definition Example
n! factorial n! = 1·2·3·...·n 5! = 1·2·3·4·5 = 120
nPk permutation nPk = n! / (n−k)! 5P3 = 5! / (5−3)! = 60
nCk combination nCk = n! / [k!(n−k)!] 5C3 = 5! / [3!(5−3)!] = 10
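A short sketch (Python, my own addition) reproducing the example column with the standard library (math.perm and math.comb require Python 3.8 or later):

import math

print(math.factorial(5))   # 5! = 120
print(math.perm(5, 3))     # 5P3 = 60
print(math.comb(5, 3))     # 5C3 = 10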

Statistical Symbols


Alphabetical Statistical Symbols


Symbol    Text Equivalent    Meaning    Formula    Link to Glossary (if appropriate)
a        y-intercept of the least squares regression line        Regression: y on x
b        slope of the least squares regression line        Regression: y on x
B(n, p)        binomial distribution with parameters n and p: discrete probability distribution for the number of successes in n independent random trials under identical conditions    if X follows B(n, p), then P(X = r) = nCr p^r (1−p)^(n−r), r = 0, 1, 2, ..., n    Binomial distribution
c        confidence level        Confidence interval
nCr    n-c-r    combinations (number of combinations of n objects taken r at a time)    nCr = n! / [r!(n−r)!]
Cov(X, Y)        covariance between X and Y
CV        coefficient of variation
df        degree(s) of freedom
E        maximal error tolerance for large samples
E(f(x))        expected value of f(x)
f        frequency (number of times a score occurs)
F        F-distribution variable, where n1 and n2 are the corresponding degrees of freedom        F-distribution, hypothesis testing for equality of 2 variances
F(x)        distribution function
f(x)        probability mass function (depends on the distribution)
H0    H-naught    null hypothesis: the hypothesis about the population parameter        Testing of hypothesis
H1    H-one    alternate hypothesis: constructed so that it is the one to be accepted when the null hypothesis must be rejected        Testing of hypothesis
IQR        interquartile range        Measures of central tendency
MS    M-S    mean square        Analysis of variance (ANOVA)
n        sample size (number of units in a sample)
N        population size (number of units in the population)
nPr    n-p-r    permutation (number of ways to arrange n distinct objects in order, taking them r at a time)    nPr = n! / (n−r)!
p̂    p-hat    sample proportion        Binomial distribution
P(A|B)        probability of A given B        Conditional probability
P(x)        probability of x
p-value        the attained level of significance: the smallest level of significance for which the observed sample statistic tells us to reject the null hypothesis
Q        probability of the event not happening
Q1    Q-one    first quartile: median of the lower half of the data (the data below the median)        Measures of central tendency
Q2    Q-two    second quartile, or median: central value of the ordered data        Measures of central tendency
Q3    Q-three    third quartile: median of the upper half of the data (the data above the median)        Measures of central tendency
r        sample correlation coefficient
r²    r-square    coefficient of determination
R²    R-square    multiple correlation coefficient
s        sample standard deviation        Measures of dispersion
s²    s-square    sample variance        Measures of dispersion
se²    s-e-square    error variance
SD        sample standard deviation
        Bowley's coefficient of skewness        Measures of skewness
        Pearson's coefficient of skewness        Measures of skewness
        sum of squares
t        Student's t variable        t-distribution
tc    t critical    the critical value for a confidence level c: the number such that the area under the t distribution, for the given degrees of freedom, falling between −tc and tc equals c        Testing of hypothesis
Var(X)        variance of X
X        independent (explanatory) variable in regression analysis; e.g. in a study of yield obtained and irrigation level, the independent variable is the irrigation level
x̄    x-bar    arithmetic mean (average) of the X scores        Measures of central tendency
y        dependent (response) variable in regression analysis; e.g. in a study of yield obtained and irrigation level, the dependent variable is the yield obtained
Z    Z-score    standard normal variable (normal variable with mean 0 and SD 1): Z = (X − μ) / σ, where X follows Normal(μ, σ²)        Standard normal distribution
zc    z critical    the critical value for a confidence level c: the number such that the area under the standard normal curve falling between −zc and zc equals c        Testing of hypothesis, confidence interval
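As a closing sketch (Python, my own addition, using only the standard library and the textbook least squares formulas), a few of the quantities in the table: the slope b and intercept a of the y-on-x regression line, and z-scores of the x values.

import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

# slope b = Sxy / Sxx, intercept a = y-bar - b * x-bar for the line y = a + b*x
x_bar = statistics.mean(x)
y_bar = statistics.mean(y)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)
b = sxy / sxx
a = y_bar - b * x_bar
print(a, b)   # 2.2 0.6 for this data

# z-score of each x value relative to the sample mean and sample standard deviation
s = statistics.stdev(x)
print([(xi - x_bar) / s for xi in x])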