Quotation

In this paper, we propose a trend factor to capture cross-sectional stock price trends. In contrast to the popular momentum factor, constructed by sorting stocks on the single criterion of past-year performance, we form our trend factor with a cross-sectional regression approach that makes use of multiple trend indicators containing daily, weekly, monthly and yearly information. We find that the average return on the trend factor is 1.61% per month, more than twice that of the momentum factor; the Sharpe ratio is also more than twice as large. Moreover, during the recent financial crisis, the trend factor earns 1.65% per month while the momentum factor loses 1.33% per month. The trend factor return is robust to a variety of control variables including size, prior-month return, book-to-market, idiosyncratic volatility, liquidity, etc., and is greater under greater information uncertainty. In addition, the trend factor explains well the cross-sectional decile portfolio returns sorted by short-term reversal, momentum, and long-term reversal as well as various price ratios (e.g. E/P), and performs much better than the momentum factor.

The basic idea is to first calculate month-end moving averages of prices at different lags, then cross-sectionally regress monthly returns at date t on all the moving-average series at date t-1, and finally predict monthly returns at date t+1 using the regression estimates together with the moving-average series at date t. This procedure guarantees that we forecast returns at t+1 with an information set only up to t. We then rank all stocks into quintiles based on the forecasts, go long the quintile with the highest forecast returns, short the quintile with the lowest, and rebalance once per month. Over all US stocks this strategy generates, on average, a 1.61% monthly return and a Sharpe ratio of 0.29, performs especially well during recessions, and outperforms several existing factors. Moreover, its good performance cannot be explained by firm fundamentals.
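
In R, the forecasting step can be sketched roughly as follows; everything here (the simulated prices, the lag set, the window sizes) is my own illustrative assumption, not the paper's exact specification:

```r
# Minimal sketch of the forecast step on simulated data; lags, windows
# and normalization are illustrative assumptions only.
set.seed(1)
n_stocks <- 100; n_days <- 270
prices <- matrix(cumprod(1 + rnorm(n_stocks * n_days, 0, 0.02)),
                 n_days, n_stocks)

# moving-average indicator: MA of the last L prices, scaled by the last price
ma <- function(p, L) mean(tail(p, L)) / tail(p, 1)
lags <- c(5, 20, 60, 200)

A_prev <- sapply(lags, function(L) apply(prices[1:230, ], 2, ma, L))  # info at t-1
A_now  <- sapply(lags, function(L) apply(prices[1:250, ], 2, ma, L))  # info at t
ret_t  <- prices[250, ] / prices[230, ] - 1        # realized return over month t

# cross-sectional regression at t, then forecast month t+1 returns
beta <- coef(lm(ret_t ~ A_prev))
forecast <- as.vector(cbind(1, A_now) %*% beta)    # one forecast per stock

# long the top quintile, short the bottom quintile
q <- quantile(forecast, c(0.2, 0.8))
long_leg  <- which(forecast >= q[2])
short_leg <- which(forecast <= q[1])
```

Note that only information dated t or earlier enters the forecast, matching the paper's out-of-sample discipline.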

I implement this strategy on Chinese stock data, adjust the rebalancing frequency to weekly for convenience, and push it to the extreme: always go long the single stock with the highest forecast return, with no shorting allowed and a stop loss at 5%. The result is striking: it yields an annualized return of 97.15% from March 2013 to February 2014, with a maximum drawdown of 30.01%. The fund curve is as follows (note: I didn't use all Chinese stocks, only the 840 stocks with good liquidity in my stock pool, so there is selection bias; please interpret the result cautiously...)

Nice shot. It seems to be better than the simple strategy between A-shares and H-shares.

Tags - trend , strategy , china

At the moment there are 84 firms listed on both the A (Shanghai and Shenzhen) and H (Hong Kong) stock markets. According to the law of one price, the stock prices of these firms should be at similar levels. However, there are huge differences: before adjusting for the exchange rate (1 RMB = 1.28 HK$), the ratio of the A-market price to the H-market price of the same firm ranges from 52.72% to 617.59% as of 02/03/2014. Is the difference mean reverting? If yes, we would expect the stock trading cheaper in the A market to go up, and vice versa. So can we profit by going long the stocks with large differences?

A rigorous statistical test should be run to examine whether the ratio is indeed mean reverting. For simplicity, I construct a trading strategy: each week I go long, at the opening price, the A-market stock with the smallest price ratio of the previous week, hold it for one week, and sell it at the weekly closing price. Short selling is not allowed for individual investors in the A market. The stop loss is set arbitrarily at 5%, and the transaction cost is 0.18% per trade.
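
A rough sketch of the weekly rule in R, on simulated ratios and returns (the numbers are random draws, not actual A/H quotes):

```r
# Sketch of the weekly long-lowest-ratio rule; all data below are
# simulated for illustration, not actual A/H market quotes.
set.seed(2)
n_firms <- 84; n_weeks <- 52
ratio  <- matrix(runif(n_firms * n_weeks, 0.5, 6.2), n_firms, n_weeks)  # A/H price ratio
wk_ret <- matrix(rnorm(n_firms * n_weeks, 0, 0.03), n_firms, n_weeks)   # open-to-close weekly return

cost <- 0.0018; stop_loss <- -0.05
strat <- numeric(n_weeks - 1)
for (w in 2:n_weeks) {
  pick <- which.min(ratio[, w - 1])        # smallest ratio of the previous week
  r <- max(wk_ret[pick, w], stop_loss)     # crude stop loss at -5%
  strat[w - 1] <- r - cost                 # net of the 0.18% transaction cost
}
annualized <- prod(1 + strat)^(52 / length(strat)) - 1
```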

The results of this simple strategy from 02/2013 to 01/2014 are:

Annualized Return    0.2070
Annualized Std Dev   0.2545
Annualized Sharpe    0.8133

Maximum Drawdown

    From        Trough      To          Depth    Length  To Trough  Recovery
1   2013-09-13  2013-12-13  <NA>        -0.1275      19         12        NA
2   2013-08-16  2013-08-23  2013-09-06  -0.0566       4          2         2
3   2013-03-22  2013-04-19  2013-05-03  -0.0488       5          3         2
4   2013-07-12  2013-07-12  2013-07-19  -0.0374       2          1         1
5   2013-05-31  2013-05-31  2013-07-05  -0.0229       6          1         5

The fund curve

The lower line is the return of a buy-and-hold strategy over all 84 firms.

Considering that 2013 was a gloomy year for the A market and this strategy is long only, the performance is not bad at all. Comments are welcome.

Tags - strategy , china

Pat at Portfolio Probe recently ran a nice comparison of heuristic optimization methods, including simulated annealing, traditional genetic algorithms and evolutionary algorithms. Using the R packages and functions Rmalschains, GenSA, genopt, DEoptim, soma, rgenoud, GA, NMOF and the SANN method of optim, he finds that the Rmalschains and GenSA packages stand out. Nice one; the original article is at A comparison of some heuristic optimization methods.

Tags - r , optimization

There are sound reasons to believe that CDS spreads stay high in periods of turbulence while remaining stably low during most quiet periods. To investigate whether there is a regime-switching phenomenon, I run a three-year rolling panel regression of the CDS spreads of over 250 reference entities on several widely accepted explanatory variables: leverage, volatility, Treasury yield, and the spread between the three-month Libor and repo rates, the last of which proxies liquidity risk. The coefficients for each variable are plotted below.

The coefficients on leverage and Treasury yields change over time but show no clear regime pattern; on the contrary, the volatility and especially the liquidity effects suggest there may be regime switching, and hence the need for a Markov regime-switching model to explain CDS spreads.
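
The shape of the rolling estimation can be sketched as below; I use a plain pooled lm() on simulated data purely for illustration, whereas the actual exercise used the plm package:

```r
# Sketch of a 3-year rolling regression on a pooled panel; all variables,
# coefficients and sample sizes here are made up for illustration.
set.seed(3)
months <- 120; firms <- 50
panel <- data.frame(
  month    = rep(1:months, each = firms),
  leverage = runif(months * firms),
  vol      = runif(months * firms),
  yield    = runif(months * firms),
  liq      = runif(months * firms)
)
panel$cds <- with(panel, 2 * vol + 1.5 * liq - yield) + rnorm(nrow(panel), 0, 0.2)

window <- 36                               # three years of monthly data
ends <- window:months
coefs <- t(sapply(ends, function(e) {
  sub <- panel[panel$month > e - window & panel$month <= e, ]
  coef(lm(cds ~ leverage + vol + yield + liq, data = sub))
}))
# one row of coefficient estimates per window end; plotting each column
# against `ends` gives the coefficient paths described above
```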

PS: a MATLAB Markov regime-switching package can be found here; the panel regression was done with the R package plm, documented at http://cran.r-project.org/web/packages/plm/vignettes/plm.pdf

Tags - regime , cds

Check out a demo video at http://rcom.univie.ac.at/RExcelDemo/

The package can be downloaded for free at http://rcom.univie.ac.at/download.html

Tags - r , excel

file.exists() and unlink() have more support for files > 2GB;

A few more file operations will now work with >2GB files.

which could relieve our worry about handling large datasets in R. An exciting post shows how to speed up R code by up to 4 times using the new R compiler package.

http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Does-R-run-under-Windows-Vista_003f

http://www.r-statistics.com/2011/04/how-to-upgrade-r-on-windows-7/

http://stackoverflow.com/questions/1401904/painless-way-to-install-a-new-version-of-r

Tags - r

Simplified code is listed below:

for (i in 1:N) {  # N could be large; AdjS and VarS are given initially and updated at each i
  PredS <- F %*% AdjS                               # predicted state
  PredY <- H %*% PredS                              # predicted observation
  PredError <- Y[i, ] - t(PredY)                    # innovation (1 x m)
  VarY <- (H %*% VarS) %*% t(H)                     # innovation variance
  InvVarY <- solve(VarY)
  KG <- (VarS %*% t(H)) %*% InvVarY                 # Kalman gain
  AdjS <- PredS + KG %*% t(PredError)               # state update (the gain belongs here)
  VarS <- (diag(3) - KG %*% H) %*% VarS             # variance update
  ll[i] <- PredError %*% InvVarY %*% t(PredError)   # quadratic form for the log-likelihood
}

The point is how to vectorize the for loop while still allowing AdjS and VarS to be updated at each step. I appreciate your help.

Tags - r , vectorize

I am eager to get my hands dirty after the release of the R client library for the Google Prediction API @ http://code.google.com/p/google-prediction-api-r-client/; however, since both my office computer and my laptop run Windows, there are a few issues with the R API package:

1, the original usage example contained a mistake. I always had a problem when I wanted to train on my local data, my.model <- PredictionApiTrain(data="MYPATH/MYFILE.csv"); it says

At the beginning of PredictionApiTrain() there are lines:

if (missing(remote.file) || !is.character(remote.file))
  stop("'remote.file' should be character")

However, remote.file is an argument without a default value: function(data, remote.file, verbose = FALSE). The example has now been changed to my.model <- PredictionApiTrain(data="MYPATH/MYFILE.csv", remote.file="gs://MYBUCKET/MYOBJECT");

2, even when I use PredictionApiTrain(data="MYPATH/MYFILE.csv", remote.file="gs://MYBUCKET/MYOBJECT"), a trained model my.model is expected; however, R returns

I did install gsutil as described at http://code.google.com/apis/storage/docs/gsutil.html, and can list my bucket using: python gsutil ls gs://MYBUCKET/MYOBJECT.

The comment I got this afternoon is:

Quotation

We recommend to run this R client under Linux since this version doesn't support gsutil under Windows using Python.

CygWin is a more recommended environment for running gsutil under Windows. And we will add this feature in future release soon.

It seems we will have to wait a while before testing this promising application; luckily the ball is moving forward. Please share if you find a way out. I am looking forward to it and will update once I have successfully tried CygWin.

Tags - r , prediction

I started playing with Python two weeks ago because of R's limitations in handling large data. A friend of mine suggested I try Python since I have to massage data frequently: "Python is the best choice, trust me," he said. Although I was reluctant to learn yet another piece of software, I couldn't bear the low efficiency of R (or of my workflow) for large data. My learning curve so far: an excellent free CSV splitter --> MySQL + the RMySQL package --> several R packages including bigmemory and ff. But to be honest, none of them satisfied me, either because of limitations of the method (slow, or malfunctioning) or of my own computer (short of memory).

After nearly two weeks I am struck by Python's power and easy-to-use design; dealing with a 10GB CSV has never been so easy. More importantly, you can access R from Python almost seamlessly with the RPy package. To get started, I recommend the following readings to Python newbies like me:

1, commands dictionary Matlab vs R vs Python;

2, free ebook Dive Into Python;

3, a text book Machine Learning: An Algorithmic Perspective by Prof. Stephen Marsland.

The third book is especially useful for data analysis, as it contains lots of Python code examples; the code and datasets can be downloaded from the author's website http://www-ist.massey.ac.nz/smarsland/MLBook.html. Take a look before deciding to add it to your shelf.

Tags - python , r

How to proceed efficiently? Below is an excellent presentation on

View more presentations from Ryan Rosario.

BTW, determining the number of rows of a very big file is tricky: you don't have to load the whole dataset and call dim(), which easily runs out of memory. One way is readLines(), for example:

data <- gzfile("yourdata.zip", open = "r")  # gzfile() reads .gz; for a real .zip archive use unz()
MaxRows <- 50000
TotalRows <- 0
while ((LeftRow <- length(readLines(data, MaxRows))) > 0)
  TotalRows <- TotalRows + LeftRow
close(data)

Tags - data , csv

Is this really true for corporate bonds? I run a simple regression in R to test my data: US corporate bond data are downloaded from TRACE (Trade Reporting and Compliance Engine), CDS data from Datastream, and Treasury/swap interest rates from the Federal Reserve Bank; the sample contains 2409 bonds over the years 2004~2010. The liquidity of a corporate bond is measured as in the paper

where alpha1 and alpha2 represent the bid and ask spread, respectively. Using maximum likelihood estimation we can estimate the transaction cost alpha2 - alpha1 for each bond; obviously, the higher the transaction cost, the lower the liquidity.

Then I estimate the long-term liquidity premium by decomposing the yield spread as y - r = lambda + l, i.e. l = (y - r) - lambda, where y, r, lambda and l are the corporate bond yield, the risk-free rate, the credit risk premium and the liquidity premium. Finally I rank the corporate bonds by their liquidity premium and scale the ranking to lie between 0 and 1: the higher the number, the lower the liquidity premium. What we would expect is a negative relationship between liquidity and the liquidity premium; that is to say, investors expect a lower liquidity premium for holding a more liquid bond, and vice versa.

Below is a simple univariate regression of liquidity on the liquidity premium, where swap.liquidity is the liquidity premium estimated using the swap rate as the risk-free rate. Not only is the liquidity premium significant, but its coefficient also has the intuitive, expected sign.

The regression result shows the liquidity premium is highly significant even after controlling for typical bond characteristics such as coupon rate, issue amount, age, maturity and rating, where rating is expressed on a scale from 1 to 7, with 1 being the highest rank, "AAA". The larger the issue amount, the lower the transaction cost (and the higher the liquidity); likewise, the higher the rating, the lower the transaction cost. Overall there is a 140-basis-point difference in transaction costs between the lowest and highest premium.

I think by now we can draw a conclusion based on this empirical analysis:

Tags - bond , liquidity

This post is therefore

The true values are listed in the paper "

I intentionally set the starting values far away from the true ones in order to see which optimizer finds the closest answers; nlminb clearly outperforms all the others and returns almost the true values. I changed the starting values randomly and still,

Check several alternative R optimization packages if you are not satisfied.

Tags - r , optimization , filter

I came across an easy-to-use

Amelia II is built on the R language, so users have to install R before running it. Installing Amelia is straightforward: download and run the exe file, that's it. For me, the beauty of Amelia II is its friendly interface; I don't even need to run R myself. Double-clicking Amelia II shows the following

As you can see from the input and output menus, it supports CSV files: simply importing a CSV with missing data returns a CSV with imputed data. Amazing, isn't it?

Download the software and help documents at http://gking.harvard.edu/amelia/.

Tags - r

Suppose you have a matrix of bond data

Suppose you are interested in the total amount of bonds in each rating, each industry, or each time to maturity. How do you proceed? You may think of lapply, sapply or even a for loop; that works, but at the cost of efficiency (coding time and running time) and possible errors (personally I often have to revise my sapply code twice before it works, sad...).

It becomes much easier with the

That's it: easy to use and efficient, right? Download the R reshape package at http://cran.r-project.org/web/packages/reshape/index.html
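
For illustration, here is the kind of grouping involved, written with base R's aggregate() so the snippet is self-contained (reshape's melt()/cast() produces the same totals); the bond data below are made up:

```r
# Total bond amount per rating and per industry; the data are fabricated
# purely to show the grouping pattern the post describes.
bonds <- data.frame(
  rating   = c("AAA", "AA", "AAA", "BBB", "AA"),
  industry = c("fin", "ind", "fin", "ind", "fin"),
  amount   = c(100, 50, 80, 30, 60)
)
aggregate(amount ~ rating,   data = bonds, FUN = sum)  # total per rating
aggregate(amount ~ industry, data = bonds, FUN = sum)  # total per industry
```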

Tags - r , package

Quotation

Suppose I have a large CSV file with over 30 million rows; both Matlab and R run out of memory when importing the data. Could you share your way of handling this issue? What I am thinking of is:

a) split the file into several pieces (free, straightforward but hard to maintain);

b) use MS SQL/MySQL (have to learn it, MS SQL isn't free, not straightforward).

1, 1) import the large file via "scan" in R;

2) convert it to a data.frame, to keep the data formats;

3) use cast, to group the data into as "square" a format as possible; this step involves the reshape package, a very good one.

2, use the bigmemory package to load the data, so in my case, read.big.matrix() instead of read.table(). There are several other interesting functions in this package, such as mwhich() replacing which() for memory reasons, foreach() instead of for(), etc. How much can this package handle? I don't know, but the authors successfully loaded a CSV as large as 11GB.

3, switch to a 64-bit version of R with enough memory, preferably on Linux. I can't test this solution at my office due to administrative constraints, although it is doable; as mentioned in the R help document,

Quotation

64-bit versions of Windows run 32-bit executables under the WOW (Windows on Windows) subsystem: they run in almost exactly the same way as on a 32-bit version of Windows, **except that the address limit for the R process is 4GB (rather than 2GB or perhaps 3GB)**....The disadvantages are that all the pointers are 8 rather than 4 bytes and so small objects are larger and more data has to be moved around, and that far less external software is available for 64-bit versions of the OS.

Search & trial.
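
As a supplement to the options above, here is a base-R sketch of chunked processing: stream the file in fixed-size pieces and keep only running totals, so memory use stays flat regardless of file size (the helper name and demo file are mine, not from any package):

```r
# Stream a CSV in chunks and accumulate a running sum of one column,
# never holding the whole file in memory (demo file below is tiny).
summarise_big_csv <- function(path, col, chunk = 1e5) {
  con <- file(path, open = "r")
  on.exit(close(con))
  header <- strsplit(readLines(con, n = 1), ",")[[1]]
  j <- match(col, header)
  total <- 0; rows <- 0
  repeat {
    lines <- readLines(con, n = chunk)
    if (length(lines) == 0) break
    vals <- as.numeric(vapply(strsplit(lines, ","), `[`, "", j))
    total <- total + sum(vals)
    rows <- rows + length(lines)
  }
  list(rows = rows, mean = total / rows)
}

# tiny demo file
f <- tempfile(fileext = ".csv")
writeLines(c("id,price", "1,10", "2,20", "3,30"), f)
res <- summarise_big_csv(f, "price", chunk = 2)
res   # rows = 3, mean = 20
```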

Tags - r , csv

formatR is a package for tidying R source code. Although it is less convenient than the straightforward shortcuts in Matlab, this package is good enough for me. What is it for? As the title suggests:

Quotation

formatR: format R code automatically, farewell to ugly R code

Below is a comparison before and after using formatR.

Download the package at http://cran.r-project.org/web/packages/formatR/index.html

Tags - r

However, generating a GUI is by no means easy; I know the pain from creating the Matlab-GUI equity derivative calculator. It is even worse in R: to be honest, I hate graph plotting in R, which is terribly inflexible compared with Matlab. Luckily I came across a good R GUI package named "

Very nice indeed; after playing with it for half an hour, I find it simple to use, especially when all you need is a basic GUI to demonstrate a rough idea to others.

For a simple example, I created a European option pricer using the Black-Scholes formula:

EuropeanOption <- function(s, k, r, t, vol, CallOption) {
  d1 <- (log(s / k) + (r + 0.5 * vol^2) * t) / (vol * sqrt(t))
  d2 <- d1 - vol * sqrt(t)
  if (CallOption) {
    return(s * pnorm(d1) - k * exp(-r * t) * pnorm(d2))
  } else {
    return(k * exp(-r * t) * pnorm(-d2) - s * pnorm(-d1))
  }
}

res <- gui(EuropeanOption, argOption=list(CallOption=c("TRUE","FALSE")))

It returns a GUI that looks like

where you can set inputs and get outputs. Nice. A more advanced GUI is possible by adding more lines.
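
A quick sanity check of the pricer, restated here so the snippet is self-contained: call minus put should satisfy put-call parity, C - P = s - k * exp(-r * t).

```r
# Put-call parity check for the EuropeanOption() pricer above.
EuropeanOption <- function(s, k, r, t, vol, CallOption) {
  d1 <- (log(s / k) + (r + 0.5 * vol^2) * t) / (vol * sqrt(t))
  d2 <- d1 - vol * sqrt(t)
  if (CallOption) s * pnorm(d1) - k * exp(-r * t) * pnorm(d2)
  else            k * exp(-r * t) * pnorm(-d2) - s * pnorm(-d1)
}
call_px <- EuropeanOption(100, 100, 0.05, 1, 0.2, TRUE)
put_px  <- EuropeanOption(100, 100, 0.05, 1, 0.2, FALSE)
parity_gap <- (call_px - put_px) - (100 - 100 * exp(-0.05))  # ~ 0
```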

Interested readers shall download the package "

Tags - r , gui

As the "for loop" is very slow in R, we should try our best to avoid it and use vectorization instead. sapply is designed for this; for example, instead of:

for (i in 1:10) {
  z[i] <- mean(x[1:i])
}

we could use

z <- sapply(1:10, function(i, x) {mean(x[1:i])}, x)

That works well, but what if, besides computing z, I need to update another variable? For example, with a loop it is:

temp <- 3
for (i in 1:10) {
  x[i] <- x[i] - temp
  z[i] <- mean(x[1:i])
  temp <- x[i] - z[i]
}

In this case temp changes at every step (it doesn't have to be a function of z[i]). How can I vectorize that with sapply, given that sapply can't return two variables z and temp?

Many thanks.

The following is an example returning the same result with "for loop" and "sapply".

sapply.example <- function(nsim = 10) {
  x <- rnorm(nsim)
  y <- list()

  # for-loop version
  z.for <- array(0, nsim)
  temp <- 3
  for (i in 1:nsim) {
    x[i] <- x[i] - temp
    z.for[i] <- mean(x[1:i])
    temp <- x[i] - z.for[i]
  }
  y$z.for <- z.for

  # sapply version: carry both temp and z in one doubled array
  z.sapply <- array(0, 2 * nsim)
  z.sapply[1] <- 3
  z.sapply <- sapply(seq(1, 2 * nsim, by = 2), function(i, x, z.sapply) {
    temp <- z.sapply[i]
    z.temp <- mean(x[1:((i + 1) / 2)])
    temp <- x[((i + 1) / 2)] - z.temp
    z.sapply[i] <- temp
    z.sapply[i + 1] <- z.temp
    z.sapply[i:(i + 1)]
  }, x, z.sapply, simplify = TRUE)
  y$z.sapply <- z.sapply[seq(2, 2 * nsim, by = 2)]

  y
}

Tags - r

One annoying issue is that each piece of software has its own coding rules: in Matlab we write a(1,1) but in R we write a[1,1]; Matlab has ones(3,2) but R has no such command, only matrix(1,3,2), and so on (I keep wondering why they can't be designed similarly). This does cause me trouble sometimes. Luckily I came across a web page similar to a cheat sheet: it lists widely used R commands and the corresponding Matlab commands; very convenient indeed.

Search before trial & error: http://mathesaurus.sourceforge.net/octave-r.html

Walking Randomly suggests another excellent 47-page PDF manual, fantastic! http://www.math.umaine.edu/~hiebeler/comp/matlabR.pdf

Tags - r , matlab

Below is a short plan; help me extend it by leaving a comment, thanks.

Tags - r , package , rate

DESCRIPTION <<< package metadata

R <<< R function definitions

src <<< low-level source code (optional; e.g. your C++ files)

data <<< package datasets (optional)

# MonteCarloPi.R -- pi estimation by Monte Carlo simulation
MonteCarloPi <- function(nsim) {
  x <- 2 * runif(nsim) - 1
  y <- 2 * runif(nsim) - 1
  inx <- which((x^2 + y^2) <= 1)
  return((length(inx) / nsim) * 4)
}

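
For what it's worth, base R can generate this directory layout automatically with package.skeleton(); a minimal sketch reusing the MonteCarloPi example (the output path here is just a temporary directory for demonstration):

```r
# package.skeleton() writes the DESCRIPTION file plus R/ and man/ stubs
# for the objects you name.
MonteCarloPi <- function(nsim) {
  x <- 2 * runif(nsim) - 1
  y <- 2 * runif(nsim) - 1
  4 * mean(x^2 + y^2 <= 1)
}
pkg_dir <- tempdir()
package.skeleton(name = "MonteCarloPi", list = "MonteCarloPi", path = pkg_dir)
# edit the generated DESCRIPTION and man/ pages, then run R CMD build / check
```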
Tags - r , package

The installation is straightforward; I tried it on Windows. The source code is at http://cran.r-project.org/web/packages/RQuantLib/index.html and is self-contained: it does not even require a QuantLib (or Boost) installation. Nothing more to say; following the process, users can use the library immediately.

So far the functions and option types supported by RQuantLib are limited: vanilla options and a few popular exotics, for example American, Asian, barrier, Bermudan and binary options, as well as a range of fixed-income functions, mainly for convertible bond valuation. Hopefully it will grow quickly.

Detailed reference manual is also available at http://cran.r-project.org/web/packages/RQuantLib/index.html.

Tags - quantlib

http://quanttrader.info/public/testForCoint.html

Tags - cointegration

Interested readers can download the S-PLUS code and data at http://www.raffonline.altervista.org/fgd/

PS: to be honest, this is not an area I am familiar with at all; download at your own risk.

Tags - nonparametric

One issue is that of restrictions upon parameters. When the search algorithm is running, it may stumble upon nonsensical values - such as a sigma below 0 - and you do need to think about this. One traditional way to deal with this is to "transform the parameter space". As an example, for all positive values of sigma, log(sigma) ranges from -infinity to +infinity. So it's safe to do an unconstrained search using log(sigma) as the free parameter.
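
A minimal sketch of the transform in R: fit a normal distribution by MLE with an unconstrained optim() search over (mu, log(sigma)); the data and starting values are arbitrary.

```r
# Unconstrained MLE via the log-sigma transform: the optimizer works on
# log(sigma), and exp() maps it back to a strictly positive sigma.
set.seed(4)
x <- rnorm(500, mean = 5, sd = 2)

negll <- function(par) {
  mu    <- par[1]
  sigma <- exp(par[2])            # back-transform guarantees sigma > 0
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

fit <- optim(c(0, 0), negll)      # Nelder-Mead, no box constraints needed
est <- c(mu = fit$par[1], sigma = exp(fit$par[2]))
```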

For details about the methodology and sample code, see http://www.mayin.org/ajayshah/KB/R/documents/mle/mle.html.

Tags - mle

# All returns are assumed to be on a monthly scale!

functions including:

# moment.third

# moment.fourth

# CoSkewness

# CoKurtosis

# BetaCoVariance

# BetaCoV (wrapper for BetaCoVariance)

# SystematicBeta (wrapper for BetaCoVariance)

# BetaCoSkewness

# BetaCoS (wrapper for BetaCoSkewness)

# SystematicSkewness (wrapper for BetaCoSkewness)

# BetaCoKurtosis

# BetaCoK (wrapper for BetaCoKurtosis)

# SystematicKurtosis (wrapper for BetaCoKurtosis)

# VaR

# VaR.Beyond

# VaR.column

# VaR.CornishFisher

# VaR.Marginal

# modifiedVaR (wrapper for VaR.CornishFisher)

http://braverock.com/brian/R/extra_moments.R

Tags - moment , portfolio

2. globalMin.portfolio computes the global minimum variance portfolio

3. tangency.portfolio computes the tangency portfolio

4. efficient.frontier computes the Markowitz bullet (efficient frontier)

http://faculty.washington.edu/ezivot/econ483/portfolio.ssc

Tags - markowitz , splus

I have not tested the package yet, though; I will update later. Here is the download link: http://cran.r-project.org/web/packages/splus2R/index.html.

Tags - splus , r

R package can be downloaded at http://cran.r-project.org/web/packages/copula/index.html

Tags - copula

Time Series Manipulation, Time Series Concepts, Unit Root Tests, Modeling Extreme Values, Time Series Regression, Univariate GARCH, Long Memory, Rolling Analysis, Systems of Regression Equations, VAR Models, Cointegration, Factor Models, Term Structure, Copulas, Generalized Method of Moments, etc.

For details, please download at http://faculty.washington.edu/ezivot/MFTS2ndEditionScripts.htm

Tags - s-plus

The R-language version can be downloaded at http://cran.r-project.org/web/packages/QRMlib/index.html and the S-PLUS library accompanying the book is at http://www.ma.hw.ac.uk/~mcneil/book/index.html.

Tags - risk

Rmetrics is the premier open source solution for teaching financial market analysis and valuation of financial instruments. With hundreds of functions built on modern methods, Rmetrics combines explorative data analysis, statistical modeling and rapid model prototyping. The Rmetrics packages are embedded in R, building an environment which gives students a first-class system for applications in statistics and finance.

Download at http://cran.cnr.berkeley.edu/web/packages/fOptions/index.html

Tags - r , option

Quotation

Library of econometric functions for performance and risk analysis of financial portfolios. This library aims to aid practitioners and researchers in using the latest research in analysis of both normal and non-normal return streams.

We created this library to include functionality that has been appearing in the academic literature on performance analysis and risk over the past several years, but had no functional equivalent in R. In doing so, we also found it valuable to have wrapper functions for functionality easily replicated in R, so that we could access that functionality using a function with defaults and naming consistent with common usage in the finance literature. The following sections cover Performance Analysis, Risk Analysis (with a separate treatment of VaR), Summary Tables of related statistics, Charts and Graphs, a variety of Wrappers and Utility functions, and some thoughts on work yet to be done.

http://braverock.com/brian/R/PerformanceAnalytics/html/PerformanceAnalytics-package.html

Tags - econometrics , performance , r

Evolutionary algorithms (EAs) are search methods that take their inspiration from natural selection and survival of the fittest in the biological world. EAs differ from more traditional optimization techniques in that they involve a search from a "population" of solutions, not from a single point. Each iteration of an EA involves a competitive selection that weeds out poor solutions. The solutions with high "fitness" are "recombined" with other solutions by swapping parts of one solution with another. Solutions are also "mutated" by making a small change to a single element of the solution. Recombination and mutation are used to generate new solutions that are biased towards regions of the space where good solutions have already been seen.
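
The recombination / mutation / selection cycle described above can be sketched in a few lines of base R; this is an illustration of the idea only (the DEoptim package is the tool to actually use), and all tuning constants here are arbitrary:

```r
# Toy differential evolution: mutate, cross over, select; minimizes f
# over a box. Fw is the differential weight, CR the crossover rate.
de_sketch <- function(f, lower, upper, NP = 30, gens = 200,
                      Fw = 0.8, CR = 0.9) {
  d <- length(lower)
  pop <- matrix(runif(NP * d, lower, upper), NP, d, byrow = TRUE)
  fit <- apply(pop, 1, f)
  for (g in 1:gens) {
    for (i in 1:NP) {
      idx <- sample(setdiff(1:NP, i), 3)
      # mutation: perturb one member by the difference of two others
      trial <- pop[idx[1], ] + Fw * (pop[idx[2], ] - pop[idx[3], ])
      # crossover: each coordinate keeps the parent value with prob 1 - CR
      keep <- runif(d) > CR
      trial[keep] <- pop[i, keep]
      trial <- pmin(pmax(trial, lower), upper)  # stay inside the box
      ft <- f(trial)
      if (ft < fit[i]) {                        # selection: keep the fitter
        pop[i, ] <- trial
        fit[i] <- ft
      }
    }
  }
  list(par = pop[which.min(fit), ], value = min(fit))
}

set.seed(5)
res <- de_sketch(function(x) sum(x^2), lower = c(-5, -5), upper = c(5, 5))
```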

This R package provides the DEoptim function, which performs differential evolution optimization (an evolutionary algorithm); for details check http://cran.r-project.org/web/packages/DEoptim/index.html.

wiki(Evolutionary algorithm)

Tags - optimization

http://www.math.tu-berlin.de/~mkeller/index.php?target=rcode

Tags - copula

http://www.math.ku.dk/~rolf/teaching/mfe04/MiscInfo.html#Code

Tags - simulation

R code can be downloaded at http://www.math.ku.dk/~rolf/teaching/mfe04/MiscInfo.html#Code

wiki(Vasicek model)

Tags - vasicek , cox ingersoll ross

http://www.math.tu-berlin.de/~mkeller/index.php?target=rcode

Tags - download , data , option

http://www.econ.uiuc.edu/~roger/research/rq/rq.html

wiki(Quantile regression)

Tags - regression