Images, posts & videos related to "Numerical Integration"
I am a mathematician by profession, and am trying my hand at F#.
This is not my first exercise, but I would like some feedback especially on the testing part - I'll soon be building a much bigger system for more sophisticated numerics.
Pastebin: https://pastebin.com/R6z08kER
Gist: https://gist.github.com/OrlandoRiccardo/543fb187e6f00589a7f99cbd8884623b
I've got an integral from 2 to infinity of 1/[x*ln^2(x+sqrt(x))] to integrate numerically using MATLAB. First I removed the infinite limit and made it into a proper integral by the substitution y=1/x, which gave me the integral from 0 to 1/2 of 1/[y*ln^2(1/y+1/sqrt(y))], but this still has a singularity at y = 0 and I don't know how to remove it in order to integrate numerically. Can you help me with this?
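One way to tame both the infinite limit and the near-singularity at once is the substitution x = e^u, which cancels the 1/x factor outright. A sketch in Python with SciPy (the post asks about MATLAB, so treat this as an illustration of the transform, not of the tool):

```python
# Substituting x = e^u gives dx = e^u du, so the 1/x cancels:
#   int_2^inf dx / (x ln^2(x + sqrt(x)))  =  int_{ln 2}^inf du / ln^2(e^u + e^{u/2}),
# which has no singularity and decays smoothly like 1/u^2.
import numpy as np
from scipy.integrate import quad

def original(x):
    return 1.0 / (x * np.log(x + np.sqrt(x)) ** 2)

def substituted(u):
    # ln(e^u + e^{u/2}) = u + log1p(e^{-u/2}), written this way to avoid overflow
    return 1.0 / (u + np.log1p(np.exp(-0.5 * u))) ** 2

I1, _ = quad(original, 2.0, np.inf)
I2, _ = quad(substituted, np.log(2.0), np.inf)
```

The two results should agree; the transformed integrand is the one an adaptive quadrature handles comfortably.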
Self-studying BC here. For some rather inexplicable reason, I just suck at numerical integration (Riemann sums, midpoint, trapezoidal, etc.). This is specifically the case when the info is in tables with unequal intervals. It always costs me two points when it comes up. I've looked for resources, but there's nothing specifically geared to applying those techniques to tabular data. The FRQ solution guides don't make much sense to me.
Could someone give me some resources to work with or guide me through it with an example? That would be much appreciated.
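Since tabular FRQs with unequal intervals come up a lot, here is a minimal worked example in Python (the table values are made up). The only idea is: each strip's area is its width times the average of the two endpoint values, and you sum the strips.

```python
# Trapezoidal rule on tabular data with unequal spacing (made-up table).
t = [0.0, 2.0, 5.0, 6.0, 10.0]   # unequal intervals
v = [3.0, 7.0, 8.0, 4.0, 1.0]

total = 0.0
for i in range(len(t) - 1):
    width = t[i + 1] - t[i]                  # this strip's width
    total += width * (v[i] + v[i + 1]) / 2.0  # width * average height
print(total)
```

For this table the strips contribute 10 + 22.5 + 6 + 10 = 48.5; the unequal widths change nothing about the method, only the arithmetic.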
When can a differential equation be solved through analytical integration, and when can it be solved ONLY through numerical integration? In other words: what boxes must a differential equation tick before I know it can be analytically integrated?
I am trying to understand how to use the error bound of a trapezoidal numerical integration.
If the error bound is
abs(E) <= (K(b-a)^3 ) / (12n^2)
and
|f''(x)| <= K
How do I find (or reasonably estimate) K if I don't have the exact f(x) ?
Thanks.
Edit: To elaborate, I'm trying to propagate errors through a rather complex equation that includes a trapezoidal integration. I have the error due to my 6 measured variables accounted for, but I haven't been able to figure out how to account for the error due to the numerical integration. Any help on this would be greatly appreciated.
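One common workaround, offered as an estimate rather than a guaranteed bound: if only samples (x_i, y_i) are available, approximate f'' with second differences and take K as the largest magnitude seen, padded by a safety factor. A sketch in Python using the standard non-uniform-grid second-difference formula:

```python
# Estimate K = max |f''| from samples alone. This under-resolves f'' between
# samples, so it is an estimate, not a rigorous bound; pad it in practice.
def estimate_K(x, y):
    best = 0.0
    for i in range(1, len(x) - 1):
        h1 = x[i] - x[i - 1]
        h2 = x[i + 1] - x[i]
        # second difference on a possibly non-uniform grid
        d2 = 2.0 * (h1 * y[i + 1] - (h1 + h2) * y[i] + h2 * y[i - 1]) \
             / (h1 * h2 * (h1 + h2))
        best = max(best, abs(d2))
    return best

# sanity check on samples of f(x) = x^2, where f'' = 2 everywhere
K = estimate_K([0.0, 0.1, 0.25, 0.4, 0.6, 1.0],
               [0.0, 0.01, 0.0625, 0.16, 0.36, 1.0])
```

Since the estimate feeds an error bound, multiplying it by a factor of 1.5 to 2 is a cheap way to stay conservative.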
The graph I got using the code in the link is VERY peculiar to say the least. I have no idea what's causing it.
I want to solve something like min_x \int f(x,y)dy automatically. Not sure if there's a way to do this in JuMP with a numerical integration method or if I have to write out the integral approximation.
As far as I know, there's no way to analytically express the position of a celestial body orbiting another body in the two-body problem at time t, just as there is no closed-form expression for the circumference of an ellipse.
Am I wrong? If I'm right, is there an equation that approximates r(t) and θ(t)?
If it helps, I'm trying to plot the density wave theory for my students so they can see where the individual stars go.
Edit: I'm talking about elliptical orbits because there are no perfectly circular and parabolic orbits in real life.
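You're right that there is no closed form for r(t); the standard workaround is Kepler's equation M = E - e·sin(E), solved numerically (e.g. by Newton's method) to any accuracy, which is plenty for plotting star positions. A sketch in Python; a, e, and mu below are placeholder orbital elements:

```python
# Kepler's equation: mean anomaly M = E - e*sin(E); solve for the eccentric
# anomaly E by Newton iteration, then recover r and the true anomaly theta.
import math

def orbit_position(t, a=1.0, e=0.3, mu=4 * math.pi**2):  # mu in AU^3/yr^2 (solar)
    n = math.sqrt(mu / a**3)           # mean motion
    M = n * t                          # mean anomaly
    E = M if e < 0.8 else math.pi      # a standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < 1e-14:
            break
    r = a * (1 - e * math.cos(E))      # radius
    theta = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                           math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    return r, theta
```

At t = 0 this returns perihelion, r = a(1 - e); the iteration typically converges in a handful of steps for planetary eccentricities.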
I'm trying to look for something similar to scipy.integrate. There are several packages out there performing adaptive and other advanced integrations provided a generic function, but surprisingly I can't find one that integrates a function sampled on a grid.
I understand that I can write a Simpson's rule function myself, but if there's a package with verified tests, etc., that would be great. For some reason I haven't been able to import scipy correctly on the computer cluster, so that's not an option.
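For what it's worth, a composite Simpson's rule for uniformly sampled data is only a few lines, so it is easy to pair with unit tests while the scipy import problem gets sorted. A sketch, assuming uniform spacing and an even number of intervals:

```python
# Composite Simpson's rule for uniformly sampled data, a stand-in for
# scipy.integrate.simpson. Needs an odd number of samples (even intervals).
def simpson_uniform(y, dx):
    n = len(y) - 1                      # number of intervals
    if n % 2 != 0:
        raise ValueError("need an even number of intervals")
    s = y[0] + y[-1]
    s += 4 * sum(y[1:-1:2])             # odd-indexed interior points
    s += 2 * sum(y[2:-1:2])             # even-indexed interior points
    return s * dx / 3.0

# e.g. integrate x^2 on [0, 1] from 11 samples; Simpson is exact for quadratics
vals = [(i / 10.0) ** 2 for i in range(11)]
area = simpson_uniform(vals, 0.1)
```

A natural unit test is exactness on polynomials up to degree three, which pins down the weights.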
I am building a model of random variables in pymc3 that involves a numerical integration of some of my variables (and some data arrays), for which there is not an analytical solution. In the original pymc, I can use numpy.trapz or scipy.quad, but these do not work with the theano variables of pymc3. Surely there is a way to do non analytic integrals? Does anyone know how to accomplish this? For an example, see my unanswered question on StackOverflow.
For my numerical methods project I was given a task to do double integration of a function f(x,y), where I need to use Simpson's 3/8 rule with respect to x, and then the composite trapezoidal rule with respect to y.
I've implemented the functions for both methods, but I've been struggling with how to use them to integrate over x while treating y as a constant, and over y while treating x as a constant. For example, I would like my trapezoidal function to take in the function f(x,y), apply the method to the y variable while treating x as a constant, and return a result that is still a function of the remaining variable.
I tried searching for a long time and couldn't really find an example anywhere, and I would appreciate the help.
This is the trapezoidal rule function. I tried putting f(x(i),y) in place of f(x(i)), for example, but so far I'm stuck.
EDIT: figured it out! Newton's 3/8 Method
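For anyone finding this later, the nesting can be done by partial application: fix y, integrate the one-variable slice over x, and hand that function of y to the outer rule. A sketch in Python (the idea carries over directly to MATLAB function handles):

```python
# Inner rule: composite Simpson's 3/8 in x; outer rule: composite trapezoid
# in y. The key move is the lambda that freezes y while x is integrated out.
def simpson38(g, a, b, n):              # n must be a multiple of 3
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (2 if i % 3 == 0 else 3) * g(a + i * h)
    return 3 * h / 8 * s

def trapezoid(g, a, b, n):
    h = (b - a) / n
    s = (g(a) + g(b)) / 2
    for i in range(1, n):
        s += g(a + i * h)
    return h * s

def double_integral(f, ax, bx, ay, by, nx=9, ny=8):
    inner = lambda y: simpson38(lambda x: f(x, y), ax, bx, nx)  # x integrated out
    return trapezoid(inner, ay, by, ny)

val = double_integral(lambda x, y: x * y, 0, 1, 0, 1)   # exact answer: 1/4
```

Both rules are exact for this bilinear test integrand, which makes it a convenient first check of the wiring.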
Hi All,
I'm working on a simple structural dynamics FEA program in MATLAB. For now I am using a thin rectangular plate as the object to be analyzed, and I'm meshing it with rectangular plate elements with three degrees of freedom per node (one translational, two rotational). I have successfully implemented a normal modes solution that works seamlessly; however, I am now working on a transient base excitation solver and hitting a hard wall with the numerical integration. My solutions are blowing up except for the case of very small timesteps (~1e-7 sec), even when I am using only ~100 elements (~350 DOFs).
Questions:
Some background on my project: The test/control plate for my program is 6" x 4" x 0.125". I'm integrating the response in modal coordinates and only using contributions from the 10 lowest modes, with the highest having a frequency of 23,142 rad/s (3,863 Hz). The excitation is a 1 lb, 5 ms half-sine shock pulse on an interior node ((x, y) = (1.75 in, 1.75 in)) with the four corner nodes pinned. I am currently using the Newmark-beta method for integrating the response with beta = 1/6 and gamma = 1/2. These are all mostly arbitrary, but this is my starting point.
I've run a bit of a numerical stability case study using this plate, and I can easily see that the stable time increment is a function of (modal) damping. It follows the behavior of the function f(x) = x*e^(-x) or something very similar. At any rate, I cannot seem to find an answer on Google on stable time increments and don't have a formal academic background in numerical methods.
Any and all help is greatly appreciated!
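One data point worth checking: beta = 1/6 with gamma = 1/2 is the linear acceleration method, which is only conditionally stable, with an undamped limit of dt <= 2*sqrt(3)/omega_max (about 1.5e-4 s for 23,142 rad/s). Needing ~1e-7 s suggests much higher frequencies are present in whatever is actually being stepped; beta = 1/4 (average acceleration) is unconditionally stable for linear problems. A single-oscillator sketch in Python that exhibits the limit:

```python
# Newmark-beta on one undamped modal oscillator in free vibration (m = 1).
# With beta = 1/6, gamma = 1/2 the stability limit is dt <= 2*sqrt(3)/omega.
import math

def newmark_maxdisp(omega, dt, steps, beta=1/6, gamma=1/2):
    d, v = 1.0, 0.0
    a = -omega**2 * d
    peak = abs(d)
    for _ in range(steps):
        d_pred = d + dt * v + dt**2 * (0.5 - beta) * a   # predictors
        v_pred = v + dt * (1 - gamma) * a
        a = -omega**2 * d_pred / (1 + beta * dt**2 * omega**2)  # solve at n+1
        d = d_pred + beta * dt**2 * a                     # correctors
        v = v_pred + gamma * dt * a
        peak = max(peak, abs(d))
        if peak > 1e6:                                    # clearly unstable
            break
    return peak

w = 23142.0
dt_crit = 2 * math.sqrt(3) / w           # about 1.5e-4 s
stable = newmark_maxdisp(w, 0.9 * dt_crit, 5000)
unstable = newmark_maxdisp(w, 2.0 * dt_crit, 5000)
```

Running this shows bounded response just below the limit and blow-up above it, which is the behavior described in the post; damping shifts the limit, consistent with the case study.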
I have to do some surface integrals over a sphere as part of my thesis. What methods do people generally use for that? I've been using trapezium rule up to now (quite easy to extend from a line to a rectangle) but I don't know if that works on a sphere.
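The trapezium rule does extend to a sphere if the parametrisation's Jacobian is included: dS = R^2 sin(theta) dtheta dphi, with a plain (periodic) sum in phi. A sketch in Python; for smooth integrands, Lebedev grids or spherical designs converge much faster, but this is the direct generalisation of what the post already uses:

```python
# Product-grid quadrature on a sphere: trapezium weights in theta, plain
# periodic sum in phi, and the sin(theta) Jacobian supplies the area element.
import math

def sphere_integral(f, ntheta=200, nphi=200, R=1.0):
    dth = math.pi / ntheta
    dph = 2 * math.pi / nphi
    total = 0.0
    for i in range(ntheta + 1):
        th = i * dth
        wth = 0.5 if i in (0, ntheta) else 1.0   # trapezium endpoint weights
        row = 0.0
        for j in range(nphi):                     # periodic in phi: no endpoints
            ph = j * dph
            x = R * math.sin(th) * math.cos(ph)
            y = R * math.sin(th) * math.sin(ph)
            z = R * math.cos(th)
            row += f(x, y, z)
        total += wth * row * math.sin(th) * R**2
    return total * dth * dph

area = sphere_integral(lambda x, y, z: 1.0)       # should approach 4*pi
```

Integrating the constant 1 recovers the surface area 4*pi, which is a quick correctness check before moving to the real integrand.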
Hello all! I had a question about numerically calculating a 2D integral over a rectangular region. The TLDR is, if I use nested 1D methods, does the order of these methods matter, either in terms of the value of the result or the speed at which the integral is calculated?
The integral is of the form \int f(x,y) dx dy, where the bounds of y are from 0 to a, and the bounds of x are from 0 to infinity. The specific form of f is rather complicated; it takes a page of text to define all the terms in it. But a key property is that f decays exponentially in x and y (as both e^-x^2 and e^-y^2 ), is very well behaved in x (holding y constant), and has an oscillatory region in y (holding x constant). I can quickly calculate the location of these regions ahead of time, and their location is independent of x. Speed is an important factor here. f is also even in both x and y.
These behaviors lead me to think I should use nested 1D integration schemes: Gauss-Hermite to integrate over x, and a recursive adaptive scheme over y (first trying Simpson's, then going from there), while calculating the oscillatory region separately from the rest of the domain. The Gauss-Hermite quadrature would also be adaptive; the basic plan is to compute it using more and more abscissae until the relative error is good enough (same idea for adaptive Simpson's, obviously).
Given my strategy, there is a choice to make: I can numerically integrate over x given a value of y, and then integrate that over y, or do the reverse. Or, put another way, I can calculate
G(x) = \int f(x,y) dy using adaptive Simpson's, and then \int G(x) dx using Gauss-Hermite
Or
H(y) = \int f(x,y) dx using Gauss-Hermite, and then \int H(y) dy using adaptive Simpson's.
Of course, changing the order of integration wouldn't change the result if I were to calculate the integral analytically. But to carry out the integration numerically, is there an obvious right choice, either for calculating a more accurate value or for converging on that value quickly? My gut feeling is that one may be more efficient than the other, but it's not obvious to me which one it would be.
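A detail that matters for the H(y) ordering: Gauss-Hermite computes \int e^{-x^2} g(x) dx over the whole real line, so f has to be factored as e^{-x^2} * g, and evenness in x gives the half-line integral as half of the full-line value. A sketch in Python with a stand-in integrand (the real f is, per the post, a page long):

```python
# H(y) ordering: Gauss-Hermite in x (inner), adaptive quadrature in y (outer).
# Stand-in integrand: f(x, y) = exp(-x^2 - y^2) * cos(x*y), even in x.
import math
import numpy as np
from scipy.integrate import quad

nodes, weights = np.polynomial.hermite.hermgauss(30)

def f(x, y):
    return np.exp(-x**2 - y**2) * np.cos(x * y)

def H(y):
    g = f(nodes, y) * np.exp(nodes**2)  # strip off the e^{-x^2} weight
    return 0.5 * np.dot(weights, g)     # evenness in x: half-line = half of full line

val, _ = quad(H, 0.0, 1.0)              # adaptive outer integration in y
exact = math.pi / (2 * math.sqrt(5)) * math.erf(math.sqrt(5) / 2)
```

For this stand-in the closed form is pi/(2*sqrt(5)) * erf(sqrt(5)/2), so the nesting can be validated before the page-long f goes in; note the outer adaptive pass re-evaluates the whole inner sum at every y it probes, which is where the cost comparison between the two orderings will show up.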
Hi I want to use matlab to perform numerical integration on a function that looks something like
https://preview.redd.it/04cwipndery11.png?width=769&format=png&auto=webp&s=18ceebd1b2d6b374438c1f15a58df72c1af4bd54
where a, b and c are constants; the integration runs between the limits (c, a), but I want to iterate over various values of c.
The following is the for loop that I currently have written to do this
for Ed=0.1:0.1:10
fun = @(T) (1 - b^2 .*T/a)*(1./T.^2)*(exp(-0.5.*((T-6.1875)/2.11612).^2))*(0.8.*T/Ed);
value = integral(fun,Ed,T_hat);
dpa_rate_linear = [dpa_rate_linear;value];
end
but I keep getting the following error
Error using *
Inner matrix dimensions must agree.
Error in @(T) (1 - b^2 .*T/a)*(1./T.^2)*(exp(-0.5.*((T-6.1875)/2.11612).^2))*(0.8.*T/Ed)
Error in integralCalc/iterateScalarValued (line 314)
fx = FUN(t);
Error in integralCalc/vadapt (line 132)
[q,errbnd] = iterateScalarValued(u,tinterval,pathlen);
Error in integralCalc (line 75)
[q,errbnd] = vadapt(@AtoBInvTransform,interval);
Error in integral (line 88)
Q = integralCalc(fun,a,b,opstruct);
Could someone help me with my mistake?
I'm attempting to write a FORTRAN program that calculates the magnetic field, B, at any point outside of a bar magnet. I have asked about this already but I figured my questions weren't clear enough.
I'm going to use a first-order Euler scheme, where each side of the bar magnet is split into small cells, each with centre at (xi, yi, zi). I know I can ignore all of the sides that have any z values, and just focus on the top and bottom sides that are oriented in the x-y plane. So the method says this:
\int_S f(x,y,z) dS ≈ ΔS · Σ_i f(x_i, y_i, z_i)
where the integral is over the surface S, and the summation is over i.
ΔS, each cell's area, is given by n̂ · dS, so if the cell is oriented in the x-y plane it's just Δx · Δy.
Here is a screenshot of the specific method instructions with a figure that demonstrates it: http://prntscr.com/idnaik
The function for the magnetic field is this:
B(r) = (μ0 / 4π) · \int_S [ M(r′) · n̂ ] (r − r′) / |r − r′|³ dS′
Where the integral is over the surface S
The ultimate aim is to use python to then trace a field line in 3d, and hopefully get the shape you'd expect for a bar magnet. I'm struggling to understand this method. I've tried to construct a flow chart, but can't get very far. Any help to understand it would be appreciated, and also any help with the flow chart would be great.
Here's my flow chart: http://prntscr.com/idnbn9
Thanks (I posted this a minute ago, but the LaTeX wasn't showing up; I'm sure it's formatted correctly).
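For a uniformly magnetized bar, the surface integral shown is the "magnetic surface charge" model with sigma = M·n̂ on the top and bottom faces, and the cell sum is just the midpoint rule over each face. A sketch in Python rather than Fortran, with placeholder dimensions; the far field should approach a point dipole with m = M·V, which makes a convenient check:

```python
# Midpoint-rule cell sum over the top (+M) and bottom (-M) faces:
#   B(r) = mu0/(4pi) * sum_i sigma_i dS (r - r_i) / |r - r_i|^3
# Dimensions and M are placeholders, not values from the assignment.
import math

MU0 = 4e-7 * math.pi

def bar_magnet_B(r, Lx=1.0, Ly=1.0, Lz=0.2, M=1.0e6, n=20):
    dx, dy = Lx / n, Ly / n
    dS = dx * dy
    B = [0.0, 0.0, 0.0]
    for face_z, sigma in ((Lz / 2, +M), (-Lz / 2, -M)):   # sigma = M . n_hat
        for i in range(n):
            for j in range(n):
                xs = -Lx / 2 + (i + 0.5) * dx             # cell centre
                ys = -Ly / 2 + (j + 0.5) * dy
                sep = (r[0] - xs, r[1] - ys, r[2] - face_z)
                d3 = (sep[0]**2 + sep[1]**2 + sep[2]**2) ** 1.5
                c = MU0 / (4 * math.pi) * sigma * dS / d3
                for k in range(3):
                    B[k] += c * sep[k]
    return B

B = bar_magnet_B((0.0, 0.0, 10.0))                # a point far on the axis
m = 1.0e6 * 1.0 * 1.0 * 0.2                       # dipole moment M * volume
Bz_dipole = MU0 / (4 * math.pi) * 2 * m / 10.0**3  # on-axis dipole field
```

Matching the on-axis dipole value at large distance (and getting zero transverse field by symmetry) is a good first box to tick before the Python field-line tracing.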
Hello, everyone. :)
There's this handout I was given for my Numerical Methods class, and it's asking me to write a program (which is not what I need help with) to calculate the integral of e^(-x^2) dx from 0 to 1, using the composite Simpson's 1/3 rule, to within a given tolerance, where the error control is achieved using the "Richardson method" (which is what I need help with). When I research that online, I either find what seems to be the "differentiation version", with the central divided difference [f(a+h) - f(a-h)] / (2h), or a bunch of stuff that doesn't seem like what I am looking for (but maybe it is, and I'm just not noticing).
Could someone please point me to a resource or resources that are as close as possible to what I need or provide me with an explanation themselves?
Any input would be GREATLY appreciated!
P.S.
Do let me know if you need more information from me!
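One common reading of "Richardson method" here, offered as an assumption about what the handout intends: compare S_n with S_2n. Because composite Simpson's error is O(h^4), (S_2n - S_n)/15 estimates the error of S_2n, and adding it back extrapolates one order higher. A sketch in Python for the stated integral:

```python
# Composite Simpson's 1/3 rule with Richardson-style error control for
# the integral of exp(-x^2) on [0, 1].
import math

def f(x):
    return math.exp(-x * x)

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + i * h) for i in range(1, n, 2)) \
        + 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def simpson_richardson(f, a, b, tol=1e-10):
    n = 2
    coarse = simpson(f, a, b, n)
    while True:
        n *= 2
        fine = simpson(f, a, b, n)
        err = (fine - coarse) / 15.0     # Richardson error estimate, O(h^4) rule
        if abs(err) < tol:
            return fine + err            # extrapolated value, one order higher
        coarse = fine

val = simpson_richardson(f, 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)   # closed form via erf
```

The factor 15 is 2^4 - 1, coming straight from the O(h^4) leading error term; the same trick with 2^2 - 1 = 3 gives Romberg's first column for the trapezoid rule.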
I sat a test today and was asked to numerically integrate a data set which had 5 or 6 data points (can't remember which) using Simpson's 3/8 rule. How is this possible, when the basic rule only uses 4 data points?
I am calculating a gas diffusional profile in a solid sphere for which the first and second derivatives of the curve are both negative. This causes a systematic bias such that linear interpolation/trapz causes volume-calculation errors to rise asymptotically under certain circumstances. I have been told that Romberg integration is a good antidote to the problem; however, it appears that a requirement of the Romberg method is a smooth, continuous function. Unfortunately, this curve cannot be described by such a function.
I am wondering if someone more skilled than myself could help me come up with an interpolation scheme (piecewise?) that would give me the continuous function that could be fed into a Romberg script.
I have also tried using Simpson's rule integration, but it seems my implementation is causing significant errors as well. I am probably doing something stupid wrong, but I have been staring at this stuff too long to really even tell. The approach here is to calculate and add the volume contributed by the concentration in each sphere between two spatial nodes, where v is the height of the concentration curve at radial position r, j is the index for spatial nodes and n is the number of spatial nodes:
vol = simpsons([v(1) v(1)],0,r(1),[]) * (4/3 * pi * r(1)^3);
for j = 2:n
vol = vol + (4/3 * pi * r(j)^3 * simpsons(v(1:j),0,r(j),[]) - vol);
end
I am probably making a stupid mistake here, and I'm not too familiar with interpolation schemes in MATLAB, so any advice or help on either front is greatly appreciated.
Here is an example of one of the curves I am trying to integrate over. v is on the y-axis and r (fractional) is on the x-axis.
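A sanity check worth trying before any interpolation scheme: integrate the whole radial profile once, as total = \int_0^R v(r) 4πr² dr, rather than accumulating per-shell volumes in a loop (note that the posted update `vol = vol + (X - vol)` simplifies to `vol = X`, so only the last term survives). A sketch in Python with a trivial test profile:

```python
# Integrate a radial concentration profile over a sphere in one pass:
#   total = int_0^R v(r) * 4*pi*r^2 dr
import numpy as np
from scipy.integrate import simpson

r = np.linspace(0.0, 1.0, 101)
v = np.ones_like(r)                       # test profile: v = 1 everywhere
total = simpson(v * 4 * np.pi * r**2, x=r)  # expect the sphere volume 4*pi/3
```

With v = 1 the answer must be the sphere volume 4π/3, which Simpson reproduces exactly here (the integrand is quadratic); `simpson` also accepts non-uniform `r` grids, which fits the curved profiles shown.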
Article: http://cwzx.wordpress.com/2013/12/16/numerical-integration-for-rotational-dynamics/
This article may be of interest to game developers, particularly those that implement rotational motion, such as rigid bodies.
The article explains how the rotational equations of motion are more complicated than the translational versions (position, velocity, acceleration). The rotational equations are then solved under the assumption of constant angular acceleration. This allows for rotations and angular velocities to be advanced forwards in time by a time step. Unlike a generic integration scheme, this directly takes the rotational nature of the system into account, and the approximations that are made are explicitly stated.
Both rotation matrices and unit quaternion versions are included.
So, I have an ODE (Actually there's an additional noise term but I don't think that's relevant), of the form
dx/dt = ε f(x) + h(x), where ε is not that large, but where f(x) is computationally hard to evaluate. If we take a timestep Δ, then an n-th order Runge-Kutta method will give a local error of order Δ^(n+1).
Since εf(x) is small, an error in this term will give a smaller contribution to the total error than an error in the h(x) term. Will it give good results if we give the εf(x) term a larger time step, either directly (making it contribute only once every so often), or indirectly by using, say, 4th-order Runge-Kutta for h(x) but 1st-order Runge-Kutta for f(x)?
I hope this is clear, thanks for reading!
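This is the idea behind multirate (split-step) integration, and it can work when the slow term really is slow. A toy sketch in Python: hold the expensive εf(x) frozen for m substeps while h(x) gets the full RK4 treatment, then compare against fully coupled RK4 (f and h below are cheap stand-ins, chosen only to make the bookkeeping visible):

```python
# Multirate sketch: refresh the 'expensive' eps*f(x) only every m substeps.
import math

EPS = 0.01
f = math.sin                 # pretend this is the expensive term
h = lambda x: -x             # the cheap, fast term

def rk4_step(g, x, dt):
    k1 = g(x)
    k2 = g(x + 0.5 * dt * k1)
    k3 = g(x + 0.5 * dt * k2)
    k4 = g(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_multirate(x0, dt, steps, m=5):
    x = x0
    for i in range(steps):
        if i % m == 0:
            slow = EPS * f(x)            # expensive term, refreshed rarely
        x = rk4_step(lambda z: slow + h(z), x, dt)
    return x

def integrate_full(x0, dt, steps):
    return_x = x0
    for _ in range(steps):
        return_x = rk4_step(lambda z: EPS * f(z) + h(z), return_x, dt)
    return return_x

x_fast = integrate_multirate(1.0, 0.01, 100)   # f evaluated 20 times
x_ref = integrate_full(1.0, 0.01, 100)         # f evaluated 400 times
```

Freezing the slow term introduces an O(ε · m · Δ) consistency error, so the splitting pays off exactly when ε is small enough that this sits below the error you already tolerate; the noise term mentioned in the post would need separate care (SDE schemes have their own order theory).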
So here is the problem I've been thinking about. I apologize in advance, I don't know how to make mathematical markup on reddit. Please feel free to ask for clarifications if needed.
Statement of the Problem
R(Y_i) = R_i = Integral[dx * f(x) * P(x,Y_i),{from 0 to +infinity}]
My solution:
I assume f(x) doesn't change too much over small enough intervals dx. I can then approximate f(x) as a series of N constant values over small intervals. Formally,
f(x) = Sum[w_j * g_j(x),{j = 1 .. N}]
Where w_j is the set of approximating constant values and g_j(x) is a set of functions which take the value 1 for the appropriate interval, j, and 0 everywhere else. The j intervals are simply equally spaced subdivisions of the finite support of f(x), S (though they don't have to be).
I then have,
R_i = Sum[w_j * P_i,j, {j = 1 .. N}]
Where P_i,j is the result of the integration of the known function P(x,Y_i) over the j'th interval.
The problem is now reduced to inverting the matrix P_i,j and calculating inv(P_i,j)*R_i to solve for the coefficients w_j. The resulting solution is adequate for my current needs but I do have some questions with an eye to future improvements:
Any input would be much appreciated!
(edit: formatting)
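A minimal end-to-end sketch of the scheme described, with a made-up Gaussian kernel standing in for P(x, Y_i): build the matrix of per-interval kernel integrals, generate consistent R_i, and recover w by a linear solve (np.linalg.solve is preferable to forming the inverse explicitly; for ill-conditioned kernels, regularized least squares is the usual next step):

```python
# Top-hat expansion of f: R_i = sum_j w_j * P_ij, with
# P_ij = integral of the kernel over interval j. Kernel here is a made-up
# Gaussian exp(-4 (x - Y_i)^2), integrable in closed form via erf.
import math
import numpy as np

N = 8
edges = np.linspace(0.0, 4.0, N + 1)            # support S = [0, 4]
Y = np.linspace(0.5, 3.5, N)

def kernel_integral(i, j):
    a, b = edges[j], edges[j + 1]
    return math.sqrt(math.pi) / 4 * (math.erf(2 * (b - Y[i]))
                                     - math.erf(2 * (a - Y[i])))

P = np.array([[kernel_integral(i, j) for j in range(N)] for i in range(N)])
w_true = np.array([1.0, 2.0, 1.5, 0.5, 0.25, 1.0, 3.0, 2.0])
R = P @ w_true                                  # synthetic 'measurements'
w = np.linalg.solve(P, R)                       # solve, don't invert
```

Because R here is generated from the same quadrature, recovery is exact up to rounding; with real, noisy R_i the conditioning of P decides how much regularization is needed, which connects to the "future improvements" questions.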
I'm wondering if there isn't a way to add up sums on the TI-84 using a syntax where, say, I have a function f(x) and can just enter f(0)+f(0.5)+f(1), etc. I know there is the summation key, and there are also programs you can add to the calculator, but I'm specifically wondering if there's something like this already built in.
I'm working on a project where I will be reading in a TDM data stream from an ADC attached to a sensor and doing some simple mathematical operations (calculating RMS, peak detection, integration, derivatives, etc.) on the data and displaying it to a user. This will all be done using the MicroZed eval kit.
I have two possible options:
Read the TDM data stream into the PL, place each raw channel into a shared register, and have the PS access each raw sample, and do all of the processing in software...
Read the TDM data stream into the PL, and have the FPGA do the numerical integration/derivative/etc on the fly, and store the raw sample along with the processed values into registers, leaving the PS open to do other things and access the latest data when it needs to...
I'm leaning towards the second option, which I think is a slicker solution. This way the PS can just peek at a register pertaining to the processed data whenever it needs it and won't have to waste time calculating the latest value. The latest processed value will be updated on the fly by the FPGA. I've never done any actual mathematical operations in VHDL; I'm a software guy. I wanted to know if anyone has any experience performing numerical integration, derivatives, etc. in the fabric itself. If so, any helpful links/hints/example VHDL code? Are there ready-made open-source IPs for things like this? Is this more trouble than it's worth?
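Before committing to VHDL, the arithmetic in option 2 is worth prototyping in software: a running trapezoidal integral is one multiply-accumulate per sample, which maps naturally onto a small fixed-point pipeline in the fabric (dt/2 becomes a constant multiplier or shift). A Python model of the recurrence:

```python
# Streaming trapezoidal integrator: total += (prev + sample) * dt / 2 per
# sample, i.e. one add, one constant multiply, one register, per channel.
class StreamingIntegrator:
    def __init__(self, dt):
        self.dt = dt
        self.prev = None
        self.total = 0.0

    def update(self, sample):
        if self.prev is not None:
            self.total += (self.prev + sample) * self.dt / 2.0
        self.prev = sample
        return self.total

integ = StreamingIntegrator(dt=0.001)
for k in range(1001):                   # integrate f(t) = t over [0, 1]
    out = integ.update(k * 0.001)       # final value should be 0.5
```

A derivative is even simpler, (sample - prev)/dt, so the fabric-side cost is dominated by the fixed-point scaling decisions, not the math; validating the recurrences in software first gives bit-accurate expected values for the VHDL testbench.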
Need help with the second exercise of my intro to Matlab course here are the specific requirements: http://imgur.com/NWyxhpu
This summer I am working on my matlab programming skills and I need some help with numeric integration. I am currently modeling a 3 body problem with the earth, moon and spacecraft. I have a script written with a function for the forces, accelerations, velocities and position. My issue is that I am not sure how to totally implement these equations correctly even though I have the written out. I will gladly post the code I have written so far so I can get some feedback on the next step to take.
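Without the posted code, here is a hedged sketch of the usual next step: a generic N-body acceleration routine driven by an RK4 stepper, validated on a circular two-body orbit in normalized units before the Earth/Moon/spacecraft constants go in (Python here; the structure transfers to MATLAB directly):

```python
# Generic N-body accelerations plus one RK4 step; validated on a circular
# two-body orbit (unit central mass, massless particle) in normalized units.
import numpy as np

def accelerations(pos, mu):
    # pos: (N, 2) positions; mu: (N,) gravitational parameters G*m
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += mu[j] * d / np.linalg.norm(d) ** 3
    return acc

def rk4_step(pos, vel, mu, dt):
    def deriv(p, v):
        return v, accelerations(p, mu)
    k1p, k1v = deriv(pos, vel)
    k2p, k2v = deriv(pos + dt / 2 * k1p, vel + dt / 2 * k1v)
    k3p, k3v = deriv(pos + dt / 2 * k2p, vel + dt / 2 * k2v)
    k4p, k4v = deriv(pos + dt * k3p, vel + dt * k3v)
    return (pos + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p),
            vel + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])   # circular orbit: radius should hold
mu = np.array([1.0, 0.0])
for _ in range(628):                        # roughly one period at dt = 0.01
    pos, vel = rk4_step(pos, vel, mu, 0.01)
radius = np.linalg.norm(pos[1] - pos[0])
```

Checking that the orbit radius (and ideally energy) holds over a full period is the standard way to confirm the force/velocity/position wiring before swapping in real constants, where plain RK4 with a fixed step may need shrinking near close approaches.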
I'm trying to do some numerical integration with numpy's trapz function, but I'm getting some peculiar behavior: the measured area under the curve seems to decrease when the slope is negative. I've got a symmetrical shape made up of ordered pairs and a function that plots and integrates it, and oddly enough it's reporting that the area is negative. Does anyone see the issue?
Pastebin to code: http://pastebin.com/38yEv39Y
System: Linux Mint, Python 3.5.2
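A guess at the usual culprit, testable in a few lines: trapz integrates with signed spacing, so wherever the x coordinates of the outline run in decreasing order, those segments contribute negative area (the rename guard below is for NumPy 2.0, where `trapz` became `trapezoid`):

```python
# trapz uses signed dx: the same points traversed with decreasing x give
# the negated area, which makes closed outlines partially cancel.
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
forward = trapz(y, x)                 # x increasing: area +1
backward = trapz(y[::-1], x[::-1])    # same points, x decreasing: area -1
```

If the ordered pairs trace a shape's outline, either sort by x first (np.argsort) or use the shoelace formula, which is designed for closed curves; signed cancellation along the return path would also explain the area shrinking on negative-slope sections.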
Okay, so, I'm trying to use odeint correctly. I've gotten to the point where I can get solutions over time intervals consistently. I'm specifically using Python(x,y) with Cantera to model chemical kinetics and combustion processes.
My problem is that odeint often hangs my Python interpreter regardless of my relative/absolute tolerances, and regardless of my time steps.
Can anyone point me in the right direction on how to solve partial differential equations using odeint efficiently, and how to properly set my parameters?
A lot of you probably know a lot about numerical integration. If you do, could you provide the names of books or papers that one should read to get up to date on the techniques in the field?
http://i.imgur.com/SHnlHCM.png
^That is the question I'm trying to finish, I've done parts (a) and (b) and the first part of (c). The bit that I'm struggling with is the highlighted part.
So I've assumed the error term is of the form that was given. So I need to get a function f and take the fourth derivative. I'm thinking this will have to end up being a constant so that epsilon (sorry, I don't know the actual name of the Greek letter) can disappear.
Would I just need to let f(x) = x^4 ? That just doesn't seem right to me..
Also, one more question: how would you use the trapezium rule to approximate an integral? E.g. integrate sin(x) from 0 to pi.
When I apply the trapezium rule I get 0...
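On the last question: with a single trapezium, sin(0) = sin(pi) = 0, so the estimate is exactly 0; the composite rule with interior points converges to the true value 2. A sketch:

```python
# Composite trapezium rule; one strip of sin on [0, pi] gives 0 because both
# endpoint values vanish, while interior points recover the area.
import math

def trapezium(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + f(b) / 2 + sum(f(a + i * h) for i in range(1, n)))

one_strip = trapezium(math.sin, 0.0, math.pi, 1)   # 0: endpoints both vanish
many = trapezium(math.sin, 0.0, math.pi, 100)      # close to the exact 2
```

This is also a concrete instance of the (b-a)^3/(12n^2) error bound from part of the question: with n = 100 the error is a few times 1e-4.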
I will use example to make my question clear:
The theta method when using theta = 0.5
Local truncation error in this case is of order O( h^3 ) (read end of page 15) where h is the step size.
My understanding is that this means:
error <= C|h^3|
log(error) = 3 log(h) + log(C)
so if I draw the graph of log(error) against log(h), it should have slope 3
However, it has slope 2 instead... incidentally, there is something called the "order of the method" which in this case is equal to 2 (read beginning of page 44) as well.
My question is:
I posted this question on math.stackexchange and I have no idea what is wrong with that website.. after 24 hours I had only 7 views. This happened before and reddit helped me instead, so I hope this will happen again :)
Edit: I made a mistake in the measurement; I wasn't measuring the local error correctly. Now that it is corrected, it does give "3" and not "2".
Hi, I have a problem we have been given in an assignment for a class. A graph of output over time is given to us, along with the exact values, 20 values a second for 30 seconds. I understand the concept of the numerical integration (finding the area of each strip and taking the sum of them all), but I can't figure out how to add together the areas.
How can I take the values I need for one calculation, and then add it to the area from the second one, and so on, in VBA?
Sorry for what I imagine is a very basic thing; we have only been doing this a couple of weeks and my notes are no help to me, as all they do is explain the theory of adding the areas together and don't actually show us any of the code. I've been googling for quite a while now and can't find anything that makes sense to me either.
Thanks
I have a vector of position values. If I take diff(x) to get dx/dt, I see that there are a few outliers (the data is supposed to have a constant velocity). If I correct these values, is it possible to integrate back to get an approximately correct position vector x?
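Yes, up to an integration constant (the starting position) and whatever error the outliers already injected. A sketch in Python, assuming uniformly sampled times, using SciPy's cumulative trapezoidal rule:

```python
# Rebuild position from a cleaned velocity signal: cumulative trapezoidal
# integration, anchored at the known starting position x0.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0.0, 10.0, 101)
v = np.full_like(t, 2.0)               # cleaned, constant-velocity signal
x0 = 5.0
x = cumulative_trapezoid(v, t, initial=0.0) + x0   # x(t) = x0 + int v dt
```

If the raw data came from diff(x) without dividing by dt, np.cumsum of the corrected differences (plus x(0)) is the exact inverse instead; either way the reconstruction is only as good as the outlier repair.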
Greetings,
I am working on my first Haskell project, and in this project I need to numerically evaluate some 2D integrals over rectangular regions. I am using hmatrix which interfaces with some numerical integration routines from GSL.
To start, I would like to replicate the double integral example from Wikipedia.
let f x y = x^2 + 4 * y
Now, my understanding is that I want the inner integral to be evaluated first, which should return me a function that I can then integrate over the outer bounds of integration (y = 7 to 10). However, I am having trouble translating this into Haskell. There is an example on p.19 of the hmatrix pdf, but I am not seeing how to apply that to this problem.
Help would be very much appreciated!
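It may help to cross-check the target number with another tool while wrestling with the hmatrix types: the Wikipedia example \int_{y=7}^{10} \int_{x=11}^{14} (x^2 + 4y) dx dy evaluates to 1719, and SciPy's dblquad nests the two 1D passes the same way (inner integral evaluated first, with the inner variable as the first lambda argument). This Python check is an aside, not part of the Haskell project:

```python
# Wikipedia's double-integral example; inner variable comes first in the
# integrand's argument list, outer bounds come first in the call.
from scipy.integrate import dblquad

val, err = dblquad(lambda x, y: x**2 + 4 * y,  # x is the inner variable
                   7, 10,                       # outer bounds (y)
                   11, 14)                      # inner bounds (x)
```

In hmatrix the same shape applies: integrate `\x -> f x y` over [11, 14] inside a function of `y`, then hand that function of one variable to the outer GSL routine over [7, 10].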