MATH2070: LAB 5: Multidimensional Newton's Method
Introduction
Last time we discussed Newton's method for nonlinear equations in one real or complex variable. In this lab, we will extend the discussion to two or more dimensions. One of the examples will include a common application of Newton's method, namely, nonlinear least squares fitting.
This lab will take three sessions. If you print it, you might find the pdf version more convenient.
Modifications to newton.m for vector functions
Suppose x is a vector in R^n, x = (x_1, x_2, ..., x_n)^T, and f(x) = (f_1(x), f_2(x), ..., f_n(x))^T a differentiable vector function,
with Jacobian matrix J, J_ij = ∂f_i/∂x_j.
Recall that Newton's method can be written

    x^(k+1) = x^(k) - J(x^(k))^{-1} f(x^(k)).    (1)

(The superscript (k) indicates the iteration number. A parenthesized superscript is used to distinguish the iteration number from a vector index.)
As an implementation note, the inverse of the Jacobian should not be computed. Instead, the system

    J(x^(k)) Δx = -f(x^(k))

should be solved. As you will see later in this class, this will save about a factor of three in computing time over computing the inverse. Matlab provides a special, division-like symbol for this solution operation: the backslash (\) operator. If you wish to solve the system A*x = b,
then the solution can be written as
    x = A \ b
Note that the divisor is written so that it appears ``underneath'' the backslash. Mathematically speaking, this expression is equivalent to x = A^{-1} b,
but using \ is about three times faster.
You might wonder why Matlab needs a new symbol for division when we have a perfectly good division symbol already. The reason is that matrix multiplication is not commutative, so order is important. Multiplying a matrix times a (column) vector requires the vector to be on the right. If you wanted to ``divide'' the vector by the matrix using the ordinary division symbol, you would have to write b/A. But this would seem to imply that b is a row vector because it is to the left of the matrix. If b is a column vector, you would need to write (b'/A')', but this expression is awkward and does not really mean what you want, because the result of b'/A' would be a row vector. The Matlab method:
    x = A \ b
is better.
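As a quick illustration of the speed difference, here is a minimal timing sketch you can paste into the command window (the size n = 2000 is an arbitrary choice, and timings will vary by machine):

    n = 2000;
    A = rand(n) + n*eye(n);          % a random, well-conditioned test matrix
    b = rand(n,1);
    tic; x1 = inv(A)*b;  tInv   = toc;   % explicit inverse: avoid this
    tic; x2 = A\b;       tSolve = toc;   % backslash: solve the system directly
    fprintf('inv: %f s,  backslash: %f s,  difference: %e\n', ...
            tInv, tSolve, norm(x1-x2)/norm(x2))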
- Exercise 1:
- Start from your version of newton.m from Lab 4, or download my version. Change its name to vnewton.m and change its signature to
function [x,numIts]=vnewton(func,x,maxIts)
% comments
and modify it so that it: (a) uses the Matlab \ operator instead of scalar division to compute the increment; (b) uses the norm of the increment and the norm of the old increment instead of abs to determine r1 and the error estimate; (c) has no disp statements; and (d) has appropriately modified comments. (Leaving the disp statements in will cause syntax errors when vector arguments are present, so be sure to eliminate them.)
- Test your modifications by comparing the value of the root and the number of iterations required for the complex scalar (not vector) case from last time (f8.m). This shows that vnewton will still work for scalars. Your results should agree with those from newton. Fill in the following table (the same one as last lab, except the last line). Recall that the column norm(error) refers to the true error, the absolute value of the difference between the computed solution and the true solution.
Initial estimate    numIts    norm(error)    numIts(newton)
1+1i                _______   _________      ______________
1-1i                _______   _________      ______________
10+5i               _______   _________      ______________
10+1.e-25i          _______   _________      ______________
A complex function revisited
It is possible to rewrite a complex function of a complex variable as a vector function of a vector variable. This is done by equating the real part of a complex number with the first component of a vector and the imaginary part of a complex number with the second component of a vector.
Reminder: In Matlab, you denote subscripts using parentheses. For example, element number 13 of a vector f would be denoted f_13 mathematically and f(13) in Matlab.
Consider the complex function f(z) treated last time in f8.m. Write z = x_1 + i x_2 and f = f_1 + i f_2. Plugging these variables in yields expressions for the real and imaginary parts f_1 and f_2 in terms of x_1 and x_2.
This can be written in an equivalent matrix form as

    f(x) = [ f_1(x_1, x_2) ; f_2(x_1, x_2) ] = 0.    (2)
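As a concrete illustration of this rewriting (using the hypothetical function f(z) = z^2 as a stand-in, since the lab's actual function is the one in f8.m), writing z = x_1 + i x_2 gives f_1 = x_1^2 - x_2^2 and f_2 = 2 x_1 x_2, which could be coded as follows:

    function [f,J]=fsqv(x)
    % [f,J]=fsqv(x)
    % hypothetical example: vector form of f(z)=z^2, z=x(1)+i*x(2)
    % (not the lab's f8v.m)
    f = [ x(1)^2 - x(2)^2 ;       % real part of z^2
          2*x(1)*x(2)      ];     % imaginary part of z^2
    J = [ 2*x(1), -2*x(2) ;       % df1/dx1, df1/dx2
          2*x(2),  2*x(1) ];      % df2/dx1, df2/dx2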
- Exercise 2:
- Write a function m-file f8v.m to compute the vector function described above in Equation (2) and its Jacobian. It should be in the form needed by vnewton and have the signature
function [f,J]=f8v(x)
% comments
where f is the two-dimensional column vector from Equation (2) and J is its Jacobian matrix. Hint: Compute, by hand, the formulas for df1dx2 (the derivative of f_1 with respect to x_2), df1dx1, df2dx1, and df2dx2. Then set
J=[df1dx1 df1dx2; df2dx1 df2dx2];
- Test your vnewton.m using f8v.m starting from the column vector [1;1] and comparing with the results of newton.m using f8.m and starting from 1+1i. Both the solution and the number of iterations should agree exactly. If they do not, use the debugger to compare results after 1, 2, etc. iterations.
- Fill in the table below. Note that the norm of the error here should be the same as the absolute value of the error in the previous exercise, and the correct solutions correspond to the roots found there.
Initial estimate    numIts    norm(error)
[1;1]               _______   _________
[1;-1]              _______   _________
[10;5]              _______   _________
[10;1.e-25]         _______   _________
At this point, you should be confident that vnewton is correctly programmed and yields the same results as newton from the previous lab. In the following exercise, you will apply vnewton to a simple nonlinear problem.
- Exercise 3: Use Newton's method to find the two intersection points of the parabola and the ellipse.
- Plot both the parabola and the ellipse on the same plot, showing the intersection points. Be sure to show the whole ellipse so you can be certain there are exactly two intersection points. Please include this plot when you send me your work.
- Write a Matlab function m-file f3v.m to compute the vector function that is satisfied at the intersection points. This function should have the signature
function [f,J]=f3v(x)
% [f,J]=f3v(x)
% f and x are both 2-dimensional column vectors
% J is a 2 X 2 matrix
% more comments
% your name and the date
Hint: The vector f=[f1;f2], where f1 and f2 are each zero at the intersection points. A sketch with placeholder curves appears after this exercise.
- Starting from the initial column vector guess [2;1], what is the intersection point that vnewton finds, and how many iterations does it take?
- Find the other intersection point by choosing a different initial estimate. What initial guess did you use, what is the intersection point, and how many iterations did it take?
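A minimal sketch of such an m-file, assuming the placeholder curves x2 = x1^2 (parabola) and x1^2 + x2^2/4 = 1 (ellipse), which intersect in exactly two points; substitute the curves from your own assignment:

    function [f,J]=f3v(x)
    % [f,J]=f3v(x)
    % f and x are both 2-dimensional column vectors; J is 2 x 2
    % placeholder curves: parabola x2=x1^2, ellipse x1^2+x2^2/4=1
    f = [ x(2) - x(1)^2 ;              % zero on the parabola
          x(1)^2 + x(2)^2/4 - 1 ];     % zero on the ellipse
    J = [ -2*x(1), 1      ;
           2*x(1), x(2)/2 ];

It would then be called as, for example, [z,numIts]=vnewton(@f3v,[2;1],100).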
Slow to get started
In Exercise 2, you saw that the guess [10;1.e-25] resulted in a relatively large number of iterations. While not a failure, a large number of iterations is unusual. In the next exercise, you will investigate this phenomenon using Matlab as an investigative tool.
- Exercise 4: In this exercise, we will look more carefully at the iterates in the poorly-converging case from Exercise 2. Because we are interested in the sequence of iterates, we will generate a special version of vnewton.m that returns the full sequence of iterates. It will return the iterates as a matrix whose columns are the iterates, with as many columns as the number of iterates. (One would not normally return so much information from a function call, but we have a special need for this exercise.)
- Make a copy of vnewton.m and call it vnewton1.m. Change its signature line to the following
function [x, numIts, iterates]=vnewton1(func,x,maxIts)
% comments
- Just before the loop, add the following statement
iterates=x;
to initialize the variable iterates.
- Add the following statement just after the new value of x has been computed.
iterates=[iterates,x];
Since iterates is a matrix and x is a column vector, this Matlab statement causes one column to be appended to the matrix iterates for each iteration.
- Replace the error function call at the end of the function with a disp function call. The reason for this change is that you will be using vnewton1 later in this lab to see how Newton's method can fail.
- Test your modified function vnewton1 using the same f8v.m from above, starting from the vector [2;1]. You should get the same results as before for the solution and number of iterations. Check that the size of the matrix iterates is 2 by (number of iterations+1), i.e., it has 2 rows and (numIts+1) columns.
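For reference, a minimal sketch of where the two new statements sit inside vnewton1.m (the loop details follow your own vnewton.m, so names may differ):

    iterates = x;                          % before the loop: the initial guess
    for numIts = 1:maxIts
        [value, derivative] = func(x);
        increment = -(derivative \ value); % Newton increment via backslash
        x = x + increment;
        iterates = [iterates, x];          % append the new iterate as a column
        % ... convergence test based on norm(increment), as in vnewton.m ...
    end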
The study of the poorly-converging case continues with the following exercise. The objective is to try to understand what is happening. The point of the exercise is two-fold: on the one hand to learn something about Newton iterations, and on the other hand to learn how one might use Matlab as an investigative tool.
- Exercise 5:
- Apply vnewton1.m to the function f8v.m starting from the column vector [10;1.e-25]. This is the poorly-converging case.
- Plot the iterates in the plane using the command
plot(iterates(1,:),iterates(2,:),'*-')
It is very hard to interpret this plot, but it is a place to start. You do not have to send me a copy of your plot. Note that most of the iterates lie along the horizontal axis, with some quite large.
- Use the zoom feature of plots in Matlab (the magnifying glass icon with a + sign) to look at the region with horizontal extent [-20,20]. It is clear that most of the iterates appear to be on the horizontal axis. Please send me a copy of this plot.
- Look at the formula you used for the Jacobian in f8v.m. Explain why, if the initial estimate starts with its second component exactly zero, then all subsequent iterates also have second component zero. Remark: You have used Matlab to help formulate a hypothesis that can be proved.
- You should be able to see what is happening. I chose an initial guess whose second component was nearly zero, so subsequent iterates stayed near the horizontal axis. Since the root does not lie on that axis, it takes many iterates before the second component of the iterate can ``rise to the occasion'' and enter the region of quadratic convergence.
- Use a semilog plot to see how the iterates grow in the vertical direction:
semilogy( abs( iterates(2,:) ) )
The plot shows that the second component seems to grow exponentially (linearly on the semilog plot) with seemingly random jumps superimposed. Please send me a copy of this plot.
Nonlinear least squares
A common application of Newton's method for vector functions is to nonlinear curve fitting by least squares. This application falls into the general topic of ``optimization,'' wherein the extremum of some function is sought. If F(x) is differentiable, then its extrema are given by the solutions of the system of equations ∂F/∂x_j = 0, and these can be found using Newton's method.
The differential equation describing the motion of a weight attached to a damped spring without forcing is

    m u'' + c u' + k u = 0,

where u is the displacement of the weight from equilibrium, m is the mass of the weight, c is a constant related to the damping of the spring, and k is the spring stiffness constant. The physics of the situation indicates that m, c, and k should be positive. Solutions to this differential equation are of the form

    u(t) = a e^{-λt} cos(ωt + φ),

where λ = c/(2m), ω = sqrt(4mk - c^2)/(2m), and a and φ are values depending on the position and velocity of the weight at t = 0. One common practical problem (called the ``parameter identification'' problem) is to estimate the four values λ, ω, a, and φ by observing the motion of the spring at many instants of time.
After the observations have progressed for some time, you have a large number of pairs of values (t_k, u_k). The question to be answered is, ``What values of the four parameters would best reproduce the observations?'' In other words, find the values that minimize the norm of the differences between the formula and the observations. Writing x for the vector of the four parameters, define

    F(x) = Σ_k ( u(t_k; x) - u_k )²,    (3)

and we seek the minimum of F.
Note: The particular problem as stated can be reformulated as a linear problem, resulting in reduced numerical difficulty. However, it is quite common to solve the problem in this form, and in more difficult cases it is not possible to reduce the problem to a linear one.
In order to solve this problem, it is best to note that when the minimum is achieved, the gradient must be zero. The components of the gradient can be written as

    f_j(x) = ∂F/∂x_j = 2 Σ_k ( u(t_k; x) - u_k ) ∂u(t_k; x)/∂x_j.    (4)

(One rarely does this kind of calculation by hand any more. The Matlab symbolic toolbox, or Maple or Mathematica, can greatly reduce the manipulative chore.)
To apply Newton's method to f as defined in (4), the 16 components of the Jacobian matrix are also needed. These are obtained from the expression above by differentiating with respect to x_ℓ for ℓ = 1, 2, 3, 4.
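Carrying out that differentiation with the product rule gives the general form of the 16 entries (a standard computation, included here for reference):

    J_jℓ = ∂²F/(∂x_j ∂x_ℓ)
         = 2 Σ_k [ (∂u(t_k; x)/∂x_j)(∂u(t_k; x)/∂x_ℓ)
                   + ( u(t_k; x) - u_k ) ∂²u(t_k; x)/(∂x_j ∂x_ℓ) ].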
Remark: The function F is a real-valued function of a vector. Its gradient, f = ∇F, is a vector-valued function. The gradient of f is a matrix-valued function. This gradient of ∇F, called the ``Jacobian'' matrix in the above discussion, is the second derivative of F, and it is sometimes called the ``Hessian'' matrix. In that case, the term ``Jacobian'' is reserved for the gradient ∇F itself. This latter usage is particularly common in the context of optimization.
In the following exercise, you will see that Newton's method applied to this system can require the convergence neighborhood to be quite small.
- Exercise 6:
- Download my code for the least squares objective function F, its gradient f, and the Jacobian matrix J. The file is called objective.m because a function to be minimized is often called an ``objective function.'' (Our objective is to minimize the objective function.)
- Use the command help objective to see how to use it.
- Compute f at the solution x=[0.15;2.0;1.0;3] to be sure that the function is zero there.
- Compute the determinant of J at the solution x=[0.15;2.0;1.0;3] to see that it is nonsingular.
- Since this particular problem seeks the minimum of a quadratic functional, the matrix J must be positive definite and symmetric. Check that J is symmetric and then use the Matlab eig function to find the four eigenvalues of J and see that they are each positive.
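A minimal sketch of these checks, assuming objective.m returns its outputs in the order [f,J,F] (the same convention as easy_objective.m later in this lab; consult help objective for the actual order):

    x = [0.15; 2.0; 1.0; 3];
    [f, J, F] = objective(x);
    norm(f)          % essentially zero at the solution
    det(J)           % nonzero, so J is nonsingular
    norm(J - J')     % essentially zero, so J is symmetric
    eig(J)           % all four eigenvalues should be positive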
- Exercise 7: The point of this exercise is to see how sensitive Newton's method can be when the initial guess is changed. Fill in the following table with either the number of iterations required or the word ``failed,'' using vnewton.m and the indicated initial guess. Note that the first line is the correct solution. For this exercise, restrict the number of iterations to be no more than 100.
Initial guess             Number of iterations
[0.15; 2.0;  1.0; 3]      ______________
[0.15; 2.0;  0.9; 3]      ______________
[0.15; 2.0;  0.0; 3]      ______________
[0.15; 2.0; -0.1; 3]      ______________
[0.15; 2.0; -0.3; 3]      ______________
[0.15; 2.0; -0.5; 3]      ______________
[0.15; 2.0;  1.0; 4]      ______________
[0.15; 2.0;  1.0; 5]      ______________
[0.15; 2.0;  1.0; 6]      ______________
[0.15; 2.0;  1.0; 7]      ______________
[0.15; 1.99; 1.0; 3]      ______________
[0.15; 1.97; 1.0; 3]      ______________
[0.15; 1.95; 1.0; 3]      ______________
[0.15; 1.93; 1.0; 3]      ______________
[0.15; 1.91; 1.0; 3]      ______________
[0.17; 2.0;  1.0; 3]      ______________
[0.19; 2.0;  1.0; 3]      ______________
[0.20; 2.0;  1.0; 3]      ______________
[0.21; 2.0;  1.0; 3]      ______________
You can see from the previous exercise that Newton can require a precise guess before it will converge. Sometimes an iterate is not far from the ball of convergence, but the Newton step is so large that the next iterate is ridiculous. In cases where the Newton step is too large, reducing the size of the step might make it possible to get within the ball of convergence, even with initial guesses far from the exact solution. This is the strategy examined in the following section.
Softening (damping)
The exercise in the previous section suggests that Newton gets in trouble when its increment is too large. One way to mitigate this problem is to ``soften'' or ``dampen'' the iteration by putting a fractional factor on the increment:

    x^(k+1) = x^(k) + α Δx^(k),    (5)

where Δx^(k) = -J(x^(k))^{-1} f(x^(k)) is the usual Newton increment and α is a number smaller than one. It should be clear from the convergence proofs you have seen for Newton's method that introducing the softening factor destroys the quadratic convergence of the method. This raises the question of stopping. In the current version of vnewton.m, you stop when norm(increment) gets small enough, but if increment has been multiplied by alpha, then convergence could appear to happen immediately if alpha is very small. It is important to make sure norm(increment) is not multiplied by alpha before the test is done.
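A minimal sketch of the softened step inside the iteration loop, assuming the variable names used in vnewton.m:

    alpha = 0.5;                          % fixed softening factor
    increment = -(derivative \ value);    % full Newton increment
    x = x + alpha*increment;              % softened update, Equation (5)
    % convergence test still uses norm(increment), NOT alpha*norm(increment)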
Try softening in the following exercise.
- Exercise 8:
- Starting from your vnewton.m file, copy it to a new file named snewton0.m (for ``softened Newton''), change its signature to
function [x,numIts]=snewton0(f,x,maxIts)
and modify it to conform with Equation (5) with the fixed value alpha = 1/2. Don't forget to modify the comments and the convergence criterion. Matlab warning: The backslash operator does not observe the ``operator precedence'' you might expect, so you need parentheses. For example, 3*2\4=0.6667, but 3*(2\4)=6!
- Returning to Exercise 7, to the nearest 0.01, how large can the first component of the initial guess become before the iteration diverges? (Leave the other three values at their correct values.)
Softening by a constant factor can improve the initial behavior of the iterates, but it destroys the quadratic convergence of the method. Further, it is hard to guess what the softening factor should be. There are tricks to soften the iteration in such a way that when it starts to converge, the softening goes away (α → 1). One such trick is to compute

    α = 1 / (1 + C ||Δx^(k)||),    (6)

where the value C is a conveniently chosen constant. You should be able to see how you might prove that this softening strategy does not destroy the quadratic convergence rate, or, at least, allows a superlinear rate.
Remark: Note that the expression in (6) with C = 10 is designed to keep the length of the step below one tenth. This is a very conservative strategy. Note also that another quantity could be placed in the denominator, such as ||f(x^(k))||, so long as it becomes zero at the solution.
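One way this might be coded inside the iteration loop (the exact form of Equation (6) is a reconstruction, so treat this as a sketch):

    C = 10;                                % conveniently chosen constant
    alpha = 1/(1 + C*norm(increment));     % approaches 1 as increment -> 0
    x = x + alpha*increment;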
- Exercise 9:
- Starting from your snewton0.m file, copy it to a new file named snewton1.m, modify its signature to
function [x,numIts]=snewton1(f,x,maxIts)
and change it so that alpha is not the constant 1/2, but is determined from Equation (6). Don't forget to change the comments.
- How many iterations are required to converge, starting from x(1)=0.20 and the other components equal to their converged values? How many iterations were required by snewton0? You should see that snewton1 has a slightly larger ball of convergence than snewton0, and converges much faster.
- Using snewton1.m, to the nearest 0.01, how large can the first component of the initial guess get before the iteration diverges? (Leave the other three values at their correct values.)
Another strategy is to choose α so that the objective function almost always decreases on each iteration. This strategy can be useful when the previous approach is too slow. One way to implement this strategy is to add an additional loop inside the Newton iteration loop. Begin with a full Newton step, but check that this step reduces the objective function. If it does not, halve the step size and try again. Keep halving the step size until either the objective function is reduced or some fixed maximum number of halvings (e.g., ten) have been tried. This can be written as an algorithm:
- Step 1: On Newton iteration k, start with α = 1.
- Step 2: Compute a trial update x_trial = x^(k) + α Δx^(k).
- Step 3: If F(x_trial) < F(x^(k)) or if α has already been halved ten times, then set x^(k+1) = x_trial and continue with the next Newton iteration;
otherwise replace α with α/2 and return to Step 2.
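A minimal sketch of this halving loop inside the Newton iteration, assuming func returns the objective value F as its third output (as objective.m does):

    alpha = 1;                      % start over at 1 on each Newton iteration
    [~, ~, Fold] = func(x);         % objective value at the current iterate
    for halving = 1:10
        [~, ~, Fnew] = func(x + alpha*increment);
        if Fnew < Fold
            break                   % objective reduced: accept this alpha
        end
        alpha = alpha/2;            % otherwise halve and try again
    end
    x = x + alpha*increment;        % after ten halvings, accept smallest alpha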
- Exercise 10:
- Starting from your snewton1.m file, copy it to a new file named snewton2.m, and apply the strategy described above to repeatedly halve α until F is reduced. If ten halving steps are taken without reducing F, accept the smallest value of α and proceed with the Newton iteration.
Remark: The above algorithm can be written either as a while loop within a for loop or as two for loops, one inside the other. Indentation should help you keep the loops organized.
Remark: Pay careful attention when writing this program! The value of α should start over at 1 on each Newton iteration.
- Using snewton2.m, to the nearest 0.01, how large can the first component of the initial guess become before the iteration diverges? (Leave the other three values at their correct values.)
Warning: You may find this strategy is not better than the previous one!
Continuation or homotopy methods
As can be seen in the previous few exercises, there are ways to improve the radius of convergence of Newton's method. For some problems, such as the curve-fitting problem above, they just don't help enough. There is a class of methods called continuation or homotopy methods (or Davidenko's method, Ralston and Rabinowitz, p. 363f) that can be used to find good initial guesses for Newton's method. These methods do not appear to be discussed in Quarteroni, Sacco, and Saleri. Some other references:
- http://www.math.uic.edu/%7ejan/srvart/node4.html
- An online article (apparently originally by Davidenko) and the references therein, https://www.encyclopediaofmath.org/index.php/Parameter,_method_of_variation_of_the and also https://www.encyclopediaofmath.org/index.php/Continuation_method_(to_a_parametrized_family,_for_non-linear_operators)
- Werner Rheinboldt, Methods for Solving Systems of Nonlinear Equations, Section 7.3, p. 84ff.
In the previous section, we were concerned with solving for the minimum value of an objective function F(x). Suppose there is another, much simpler, objective function G(x) whose minimum is easy to find using Newton's method. One possible choice would be G(x) = Σ_k (x_k - (x0)_k)², for some choice of x0. For 0 ≤ p ≤ 1, consider the new objective function

    F_p(x) = p F(x) + (1 - p) G(x).    (7)

When p = 0, F_p reduces to G and is easy to minimize (to a result that we already know), but when p = 1, F_p is equal to F and its minimum is the desired minimum. All you need to do is minimize F_p for a sequence of values of p increasing from 0 to 1, where you use the solution for one value of p as the initial guess for the next value of p. For properly chosen sequences, the result from each step will be within the radius of convergence for the next step, and the final minimum will be the desired one. The trick is to find a ``properly chosen sequence,'' and there is considerable mathematics involved in doing so. In this lab, we will simply take a uniform sequence of values. This method can work quite nicely, as you see in the following exercise.
- Exercise 11:
- Suppose that x0 is a fixed four-dimensional vector and x is a four-dimensional variable. Define the simple objective function by the following code:
function [f,J,F]=easy_objective(x,x0)
% [f,J,F]=easy_objective(x,x0)
% more comments
% your name and the date

if norm(size(x)-size(x0)) ~= 0
  error('easy_objective: x and x0 must be compatible.')
end

F=sum((x-x0).^2);

% f(k)=derivative of F with respect to x(k)
f=zeros(4,1);
f= ???

% J(k,ell)=derivative of f(k) with respect to x(ell)
J=diag([2,2,2,2]);
Copy it into a file named easy_objective.m and complete the expression for the vector f.
Remark: The chosen value for x0 is problem-related and amounts to a vague approximation to the final solution.
- What are the values of f, J and F for x0=[0;2;1;2] and x=[0;0;0;0], and also for x0=[0;2;1;2] and x=[1;-1;1;-1]? You should be able to confirm that these values are correct by an easy calculation.
- Use cut-and-paste to copy the following code to a file named homotopy.m, and complete the code.
function [f,J,F]=homotopy(x,p,x0)
% [f,J,F]=homotopy(x,p,x0)
% computes the homotopy or Davidenko objective function
% for 0<=p<=1

[f1,J1,F1]=objective(x);
[f2,J2,F2]=easy_objective(x,x0);

f=p*f1+(1-p)*f2;
J=???
F=???
- Place the following code into a Matlab script file named exer11.m.
x0=[0.24; 2; 1; 3];
x=ones(4,1);
STEPS=1000;
MAX_ITERS=100;
p=0;
% print out table headings
fprintf('   p     n    x(1)    x(2)    x(3)    x(4)\n');
for k=0:STEPS
  p=k/STEPS;
  [x,n]=vnewton(@(xx) homotopy(xx,p,x0),x,MAX_ITERS);
  % the fprintf statement is more sophisticated than disp
  if n>3 || k==STEPS || mod(k,20)==0
    fprintf('%6.4f %2d %7.4f %7.4f %7.4f %7.4f\n',p,n,x);
  end
end
Try this code with x0=[0.24; 2; 1; 3]. Does it successfully reach the value p=1? Does it get the same solution values for x as in Exercises 5, 6, 8 and 9?
- Explain what the expression
@(xx) homotopy(xx,p,x0)
means in the context of exer11.m. Why is this construct used instead of simply using
@homotopy
?
- Test x0=[.25;2;1;3]; in easy_objective. Does the exer11 script successfully reach the value p=1? To the nearest 0.05, how far can you increase the first component and still have it successfully reach p=1?
- Success of the method depends strongly on the number of steps taken in moving from the simple objective to the true objective. Change STEPS from 1000 to 750, thus increasing the size of the steps. Starting from x0=[.25;2;1;3]; in easy_objective, to the nearest 0.05, how far can you increase the first component and still have it successfully reach p=1?
- Returning to STEPS=1000, can you start from x0=[0;2;1;3] and reach the solution? How about x0=[-0.5;2;1;3]?
As you can see, the idea behind continuation methods is powerful. The best that could be done in Exercise 10 was to deviate from the exact answer by a few percent, while in Exercise 11 you more than tripled the first component and still achieved a correct solution. Nonetheless, some care must be taken in choosing x0 and STEPS. Nothing is free, however, and you probably noticed that 1000 steps take a bit of time to complete.
Quasi-Newton methods
The term ``quasi-Newton'' method basically means a Newton method using an approximate Jacobian instead of an exact one. You saw in Lab 4 that approximating the Jacobian can result in a linear convergence rate instead of the usual quadratic rate, so quasi-Newton methods can take more iterations than true Newton methods. On the other hand, inverting an N x N matrix takes time proportional to N^3, while multiplying by an already-computed inverse takes time proportional to N^2.
For very large systems, then, enough time could be saved by re-using an approximate Jacobian to make up for the extra iterations required because true Newton converges faster. In the exercise below, the inverse of the Jacobian matrix will be saved from iteration to iteration and, under proper circumstances, re-used in later iterations because it is ``close enough'' to the inverse of the true Jacobian matrix.
Remark: You will see next semester that one almost never constructs the inverse of a matrix, because simply solving a linear system takes much less time than constructing the inverse and multiplying by it. Furthermore, solving a linear system by a direct method involves constructing two matrices, one lower triangular and one upper triangular. Triangular systems can be solved efficiently, and the two factors can be saved and used again. In this lab, however, we will be constructing the inverse matrix and saving it for future use because it is conceptually simpler.
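A minimal sketch of the factor-and-reuse idea from the remark (the example matrix and right sides are arbitrary):

    A = rand(5) + 5*eye(5);    % small example system
    b1 = rand(5,1);  b2 = rand(5,1);
    [L,U,P] = lu(A);           % one O(N^3) factorization
    x1 = U\(L\(P*b1));         % each pair of triangular solves is O(N^2)
    x2 = U\(L\(P*b2));         % the saved factors are reused for a new right side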
One choice of nonlinear vector function can be given componentwise, for k = 1, ..., N, by the expression coded in f10.m in the exercise below, where n = 2 and N is the dimension of x: each f_k is quadratic in the components of x, with coefficients built from partial sums of the series Σ 1/j².
Note that Σ_{j=1}^∞ 1/j² = π²/6, a value related to the first Bernoulli number.
- Exercise 12:
- Copy the following code to a function m-file f10.m and fill in values for the Jacobian matrix J by taking derivatives ``in your head'' and using the resulting formulae in place of ???.
function [f,J]=f10(x)
% [f,J]=f10(x)
% large function to test quasi-Newton methods
% your name and the date

[N,M]=size(x);
if M ~= 1
  error(['f10: x must be a column vector'])
end

n=2;
f=zeros(N,1);
J=zeros(N,N);
for k=1:N
  d=0;
  for j=k+1:N
    d=d+1/j^2;
    f(k)=f(k) - x(j)^n/j^2;
    % J(k,j)=df(k)/dx(j)
    J(k,j)= ???
  end
  for j=1:k-1
    d=d+1/j^2;
    f(k)=f(k) - x(k-j)^n/j^2;
    % J(k,k-j)=df(k)/dx(k-j)
    J(k,k-j)= ???
  end
  f(k)=f(k) + (d+1.e-5)*x(k)^n - k/(1+k)/N;
  % J(k,k)=df(k)/dx(k)
  J(k,k)= ???
end
- It is extremely important for this exercise that the Jacobian matrix be correctly computed. The terms in f are all quadratic in x, and for any quadratic function the centered difference

    ∂f_k/∂x_j ≈ ( f_k(x + h e_j) - f_k(x - h e_j) ) / (2h)

is exact for any reasonable value of h, where e_j denotes the j-th unit vector.
Choose N = 3 and convenient values of x and h, and use this formula to compute the nine derivatives ∂f_k/∂x_j for k, j = 1, 2, 3.
Compare these nine values with the values in the Jacobian matrix. If they do not agree, be sure to find your error.
Hint: You can do this in three steps by choosing the three perturbation vectors h*[1;0;0], h*[0;1;0], and h*[0;0;1], as in the sketch below.
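A minimal sketch of this check (the particular x and h are arbitrary choices; h may be large because f is quadratic):

    N = 3;  x = [1; 2; 3];  h = 1;    % illustrative values only
    [f0, J] = f10(x);                 % analytic Jacobian to be checked
    Japprox = zeros(N,N);
    for j = 1:N
        e = zeros(N,1);  e(j) = h;    % perturb one component at a time
        Japprox(:,j) = (f10(x+e) - f10(x-e)) / (2*h);
    end
    norm(Japprox - J)                 % should be of roundoff size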
You have seen that when your vnewton function is converging quadratically, the ratio r1 becomes small. One way to improve speed might be to stop recomputing the Jacobian matrix when r1 is small, and just use the previous one. If you save the inverse Jacobians from step to step, this can improve speed. In the next exercise, you will construct a quasi-Newton method that does just this.
- Exercise 13:
- Make a copy of your vnewton.m file called qnewton.m and change the signature to read
function [x,numIts]=qnewton(func,x,maxIts)
% [x,numIts]=qnewton(func,x,maxIts)
% more comments
% your name and the date
- Just before the start of the loop, initialize a variable
skip = false;
- Replace the two lines defining oldIncrement and increment with the following lines
if ~skip
  tim=clock;
  Jinv=inv(derivative);
  inversionTime=etime(clock,tim);
else
  inversionTime=0;
end
oldIncrement=increment;
increment = -Jinv*value;
- Add the following lines just after the computation of r1.
skip = r1 < 0.2;
fprintf('it=%d, r1=%e, inversion time=%e.\n',numIts,r1, ...
        inversionTime)
Make sure there are no other print statements inside qnewton.m.
- In addition, make a copy of your completed qnewton.m to a new file qnewtonNoskip.m (or use ``save as''). Change the name inside the file, and replace the line
skip = r1 < 0.2;
with
skip = false;
This will give you a file for comparing times for true Newton against those for quasi-Newton.
- The following command both solves a moderately large system and gives the time it takes:
tic;[v,its]=qnewton(@(x) f10(x),linspace(1,10,3000)');toc
How long does it take? How many iterations? How many iterations were skipped (took no time)? What is the total time for inversion as a percentage of the total time taken?
- Repeat the same experiment but use qnewtonNoskip.
tic;[v0,its0]=qnewtonNoskip(@(x) f10(x),linspace(1,10,3000)');toc
How long does it take? How many iterations? How far apart are the solutions (norm(v-v0)/norm(v))?
- For this case, which is faster (takes less total time), qnewton or qnewtonNoskip? Remark: On my computer, qnewton is about five seconds faster. If you have a faster computer, you may observe a smaller difference, or even that qnewton is slower. If so, you can try the same comparison with N=4000 or larger.
Remark 1: As mentioned before, qnewtonNoskip is slower than vnewton because the inverse matrix is constructed explicitly. Instead of computing and saving Jinv, you could compute and save the factors of the matrix. A version of qnewtonNoskip programmed this way would take about the same amount of time as vnewton, and qnewton would be faster still.
Remark 2: The choice skip = (r1 < 0.2) is very much problem-dependent. There are more reliable ways of deciding when to skip inverting the Jacobian, but they are beyond the scope of this lab.
Remark 3: For large matrix systems, especially ones arising from partial differential equations, it is faster to solve the system using iterative methods. In this case, there are two iterations: the nonlinear Newton iteration and the linear solution iteration. For these systems, it can be more efficient to terminate the linear solution iteration before full convergence is reached, saving the time for unnecessary iterations. This approach is also called a ``quasi-Newton method.''
Remark 4: When a sequence of similar problems is being solved, such as in Davidenko's method or in time-dependent partial differential equations, quasi-Newton methods can save considerable time in the solution at each step because it is often true that the Jacobian changes relatively slowly.
Extra Credit: More on minimization (8 points)
In Section 5 above, you used an objective function that I provided for you. In this section, you will see how a (simpler) objective function is generated.
Suppose you have experimental data that, based on physical grounds, you expect to behave as v(t) = c e^{-t/τ}, where c is a constant and τ a time constant. This would be the case, for example, if the quantity v satisfies a first-order differential equation. Given the experimental data, the task is to find the values of c and τ that best fit the data.
Denoting the experimental data set as the pairs (t_k, v_k) for k = 1, ..., 200, the objective function can be written as

    F(x) = Σ_k ( x_1 e^{-t_k/x_2} - v_k )²,    (8)

where x = [c; τ], and the task is to find the value of x that minimizes F. This objective function is similar to, but simpler than, the one given above in (3). Like that problem, it can be phrased as a linear problem (take logarithms), but we will treat it as a nonlinear problem.
In the following exercise, you will construct a function m-file named extra.m, analogous to objective.m, that computes the function in (8). Newton iteration applied to this objective function is sensitive to initial guesses in much the same way as discussed in Section 5.
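A minimal sketch of the data construction and objective evaluation inside extra.m; the model and the generating values c = 2 and τ = 0.3 are read off from the checks in the exercise below, so treat them as assumptions:

    t = linspace(0,1,200)';                    % 200 artificial times in [0,1]
    v = 2*exp(-t/0.3);                         % artificial observations
    F = sum( (x(1)*exp(-t/x(2)) - v).^2 );     % objective (8)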
- Exercise 14:
- Begin extra.m with the signature
function [f,J,F]=extra(x)
and add appropriate comments.
- Construct 200 artificial data values for t between 0 and 1, and for v based on the values c = 2 and τ = 0.3 (the ``correct solution'' used in the checks below).
- Construct the objective value F according to (8).
- Compute the derivatives f(1) = ∂F/∂x_1 and f(2) = ∂F/∂x_2. You should take these derivatives using pencil and paper (or some computer algebra program such as Maple, Mathematica, or the symbolic toolbox in Matlab) and then write Matlab code in extra.m.
- Compute the Jacobian matrix (or Hessian matrix) by computing the partial derivatives of f and writing Matlab code for them.
- Use your extra.m to compute f, J and F for x=[1;.2].
- Check f using first order finite differences, with increments of .001 in the first component and .0001 in the second, in the following way.
- Compute [fa,Ja,Fa]=extra([2;.3]).
- Compute [fb,Jb,Fb]=extra([2.001;.3]).
- Compute [fc,Jc,Fc]=extra([2;.3001]).
- (Fb-Fa)/.001 should approximate fa(1), and (Fc-Fa)/.0001 should approximate fa(2).
- Similarly, check that the finite difference approximations for the derivatives are in approximate agreement with the components of J.
- Double-check that at x=[2;.3], J is a positive definite, symmetric (nonsingular) matrix.
- Check that vnewton can find the correct solution starting from the correct solution ([2;.3]).
- Show that vnewton requires fewer than ten iterations to find the correct minimum starting from [1;0.9], but that starting from [1;1.0] requires more than 100 iterations. (If you do not find these values, your J is probably not correct. Repeat the finite difference checks with the increments halved and make sure the approximation error is roughly halved.)
Back to MATH2070 page.
Mike Sussman 2016-09-19