# Thread Subject: How to compute gradient to give to fminunc?

## Message 1 of 6, from Luca (22 Jun 2012, 13:58:07)

Hello! I've read many times that fminunc works optimally, and can use its large-scale algorithm, only when it is given a gradient. How can I find it? I mean: if I have to minimize an analytical function, I can compute the gradient analytically and that's no problem. But what about a function for which I can't find an analytical expression for the gradient, e.g. an image-registration metric? Is there a way to compute the gradient efficiently? I would say no, and therefore that the only feasible option is to let fminunc compute it using finite differences. Am I getting it right?
## Message 2 of 6, from Torsten (22 Jun 2012, 14:20:19)

Since you would have to calculate the gradient with a finite-difference approximation anyway, you may as well let FMINUNC do it for you. So, yes, you got it right.

Best wishes,
Torsten
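Torsten's point, that when no analytic gradient is available the optimizer can estimate one internally by finite differences, can be sketched as follows. This is an illustrative Python sketch using SciPy's `minimize` as a stand-in for MATLAB's fminunc (the thread is about MATLAB, so treat the SciPy calls as analogues), with a toy quadratic in place of a real registration metric:

```python
import numpy as np
from scipy.optimize import minimize

# Toy smooth objective standing in for a "black-box" cost
# (e.g. an image-registration metric with no analytic gradient).
def f(p):
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2

# No `jac` argument is supplied: the optimizer estimates the
# gradient internally with finite differences, just as fminunc
# does when no gradient function is provided.
res = minimize(f, x0=np.zeros(2), method="BFGS")
print(res.x)  # close to [1.0, -0.5]
```

Each finite-difference gradient costs one extra objective evaluation per parameter per iteration, which is why supplying an analytic gradient (as discussed below in the thread) pays off when the objective is expensive.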
## Message 5 of 6, from Matt J (22 Jun 2012, 15:35:08)

"Luca" wrote in message ...
> Now for the difficult part. I need to fit a function to some points using the squared difference as the objective function. Is it possible to derive the gradient analytically? I would say no. Am I right? (My function is a Gaussian with 3 free parameters: amplitude, sigma, and center. How can I compute d(SD(gaussian, targetPoint))/d(PAR) analytically?)

No, you're not right. The gradient calculation is very easy, especially for Gaussian fitting. The derivative of a Gaussian with respect to its mean, for example, is

(x - mu) * exp(-(x - mu)^2 / 2)

Everything else you need to compute the derivative of the overall function follows very simply from the chain rule. More generally, the gradient of the general form norm(err(x))^2 is 2\*Gradient(err(x))\*err(x), i.e. 2\*J(x)'\*err(x) with J the Jacobian of err.
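Matt J's chain-rule recipe can be made concrete for the exact problem Luca describes: a Gaussian with amplitude, center, and width, fitted by squared differences. The following is an illustrative Python/NumPy sketch (not code from the thread); it builds the Jacobian of the residuals, forms the gradient 2\*J'\*err, and checks it against central finite differences:

```python
import numpy as np

# Gaussian model with the three free parameters from the thread:
# amplitude a, center mu, width s.
def model(p, x):
    a, mu, s = p
    return a * np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))

def objective(p, x, y):
    r = model(p, x) - y          # residuals err(p)
    return np.sum(r ** 2)        # squared-difference cost

# Chain rule: grad ||err(p)||^2 = 2 * J(p)' * err(p),
# where the columns of J are the partials of the model.
def gradient(p, x, y):
    a, mu, s = p
    e = np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))
    r = a * e - y
    J = np.column_stack([
        e,                               # d model / d a
        a * e * (x - mu) / s ** 2,       # d model / d mu
        a * e * (x - mu) ** 2 / s ** 3,  # d model / d s
    ])
    return 2.0 * J.T @ r

# Sanity check against central finite differences.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 50)
y = model([2.0, 0.3, 1.1], x) + 0.01 * rng.standard_normal(x.size)
p = np.array([1.5, 0.0, 1.0])
g = gradient(p, x, y)
h = 1e-6
fd = np.array([(objective(p + h * np.eye(3)[i], x, y)
                - objective(p - h * np.eye(3)[i], x, y)) / (2 * h)
               for i in range(3)])
print(np.max(np.abs(g - fd)))  # agreement to roughly 1e-6 or better
```

Checking an analytic gradient against finite differences like this (MATLAB's `DerivativeCheck`/`CheckGradients` option does the same) is a cheap way to catch sign and chain-rule mistakes before trusting the gradient in the optimizer.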
## Message 6 of 6, from Luca (22 Jun 2012, 16:21:07)

"Matt J" wrote in message ...
> Everything else you need to compute the derivative of the overall function follows very simply from the chain rule. More generally, the gradient of the general form norm(err(x))^2 is 2\*Gradient(err(x))\*err(x)

Blimey, you were right! It seemed so impossible to me at the beginning; it's very counterintuitive. But I actually get quite an easy expression. That will speed up my code a lot! It's the first thing I will implement on Monday.
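To close the loop on the thread: once the analytic gradient is in hand, it is passed to the optimizer alongside the objective (in MATLAB, by returning `[f, g]` and enabling the gradient option of fminunc). Here is a hedged Python sketch of the same idea using SciPy's `minimize` with `jac=True`, fitting the three-parameter Gaussian to noiseless synthetic data; the data, starting point, and parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def model(p, x):
    a, mu, s = p
    return a * np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))

# Return the cost AND its analytic gradient together, mirroring a
# MATLAB objective that returns [f, g] to fminunc.
def cost_and_grad(p, x, y):
    a, mu, s = p
    e = np.exp(-(x - mu) ** 2 / (2.0 * s ** 2))
    r = a * e - y
    J = np.column_stack([e,
                         a * e * (x - mu) / s ** 2,
                         a * e * (x - mu) ** 2 / s ** 3])
    return np.sum(r ** 2), 2.0 * J.T @ r

x = np.linspace(-3.0, 3.0, 80)
true_p = np.array([2.0, 0.3, 1.1])     # amplitude, center, width
y = model(true_p, x)                   # noiseless synthetic data

# jac=True tells the optimizer the function returns (cost, gradient),
# so no finite-difference evaluations are needed.
res = minimize(cost_and_grad, x0=[1.0, 0.0, 1.0],
               args=(x, y), jac=True, method="BFGS")
print(res.x)  # close to [2.0, 0.3, 1.1]
```

With the analytic gradient supplied, each iteration needs only one evaluation of the model instead of one per parameter, which is the speed-up Luca anticipates.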
