It's like he had to quit his job as head engineer at NASA and is angry because he has to teach us bums now. I get a good kick out of his little rants though.
Hello everyone, my Calculus class just reached Taylor/Maclaurin polynomials. While I understand how to compute them, I'm confused about their actual use. If you have the time to differentiate a function that many times just to build a mere approximation, and the facilities to do so, why can't you just evaluate the function directly? Plugging numbers into the approximation also often seems harder than evaluating the function itself.
Any help would be greatly appreciated!
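One common answer is that computers and calculators can only add, subtract, multiply, and divide, so functions like sin, cos, and e^x have to be evaluated through polynomial (or similar) approximations anyway. As a rough sketch of the idea (not how real math libraries actually do it, but the same principle):

```python
import math

def sin_taylor(x, terms=8):
    # Maclaurin series for sin: x - x^3/3! + x^5/5! - ...
    # Each term is built from the previous one, so the whole thing
    # uses only additions and multiplications.
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(sin_taylor(1.0), math.sin(1.0))  # the two agree to many digits
```

So "calculating the function directly" is often not an option at all: the polynomial *is* how the value gets computed.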
I see this formula tossed around a lot to "demonstrate" that you shouldn't be able to see something from a given spot on Earth due to curvature. I know that this formula defines a parabola, but I also see people say it's a neat approximation for how much you wouldn't be able to see due to Earth's curvature.
Though I noticed that it's a bit of a messy formula, since depending on where you are and where you're looking, the apparent curvature changes (if I were on the west coast of Canada looking west, I'd see a much more noticeable curvature than if I did the same on the US west coast). It also seems to get really extreme because of the squared term, and if I draw it on a Cartesian plane it's a parabola with serious differences the more miles you add.
I know that Flat Earthers use this formula as the one and only way to calculate Earth's curvature, but the fact that it lacks important variables like your position and viewing direction seems a bit sketchy, not to mention that it gets really extreme the more miles you add.
So is this equation really a nice approximation or is it just a geometrical mess?
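One way to judge it is to compare the 8-inches-per-mile-squared rule directly against the exact geometric drop of a sphere. A quick sketch (assuming a mean Earth radius of 3959 miles, and measuring the drop below a tangent line rather than the more complicated hidden-height calculation):

```python
import math

R_MILES = 3959.0          # mean Earth radius, an assumed round figure
INCHES_PER_MILE = 63360

def drop_exact_inches(d):
    # Exact drop of a sphere below a tangent line after d miles
    # of arc: R * (1 - cos(d / R)).
    return R_MILES * (1 - math.cos(d / R_MILES)) * INCHES_PER_MILE

def drop_approx_inches(d):
    # The folk rule: "8 inches per mile squared."
    return 8 * d * d

for d in (1, 10, 100, 1000):
    print(d, drop_exact_inches(d), drop_approx_inches(d))
```

For small distances the two agree very closely (the rule is just the small-angle approximation d^2/(2R)), and the parabola only seriously overshoots the sphere at hundreds of miles. The real problem with the way it's used is different: how much of a distant object is *hidden* depends on the observer's height, which the formula omits entirely.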
Are there any good sources on bad approximations of pi over the years? I've considered making a video about pi when Pi Day comes this year, but I feel like conventional approaches/topics have been done to death.
I know that Numberphile did a video about the Indiana bill that would have effectively made pi 3.2, and there are misconceptions about a certain passage in the Torah being incorrect, but that's about all I know.
I know that each iteration of the Matrix lasts roughly 100 years before a reboot is needed; admittedly, I don't know whether the various reboots are completely different incarnations (for instance, I know there was a "paradise" version of the Matrix and also a "nightmare" version at various points). My question is this: assuming the form of the Matrix we're introduced to has existed for a few cycles, and the whole "actual late 20th century" version isn't just the latest attempt, did the Machines somehow get people to ignore the fact that the same decade has existed for a literal century, or does the clock constantly get reset to 1900 and end in 1999? I know that, at the very least, everything the Machines use for their various functions as programs existed by the year 1900, such as phones for the Agents to enter the world and interact. But the Machines don't want life to be extreme suffering for the humans, right? Because I do believe recreating the World Wars is likely beyond what the Machines would even do.
Can anyone help me out here?
It seems to me that log(x + 1/2) + gamma is a much better approximation of the harmonic numbers than log(x) + gamma is. log(x + 1/2) gets arbitrarily close to log(x) as x approaches infinity, so maybe the difference doesn't matter much in the limit, but at every step of the way, log(x + 1/2) seems to be a much better approximation. Why is this so, and why is log(x) used so often instead?
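For what it's worth, the gap is easy to see numerically. A quick sketch (with gamma hard-coded to double precision):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n, summed directly.
    return sum(1 / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    h = harmonic(n)
    err_plain = abs(h - (math.log(n) + EULER_GAMMA))
    err_half = abs(h - (math.log(n + 0.5) + EULER_GAMMA))
    print(n, err_plain, err_half)
```

This matches the asymptotic expansion H_n = log(n) + gamma + 1/(2n) - 1/(12n^2) + ...: expanding log(n + 1/2) absorbs the 1/(2n) term exactly, leaving an error of order 1/n^2 instead of 1/n, which is why the shifted version wins at every step even though both converge.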
There seems to have been some research activity in the past few years, but I'm not sure if any of it is relevant outside the field itself, which is not large at all. I know about Tao's recent advance on Sendov's conjecture, which truly seems intriguing and interdisciplinary. Also, I'm fairly convinced that the theory is useful, because I was taught quite a few applications in my numerical analysis class as an undergraduate. The question is more: is it an interesting field that not many people care about, or is it simply dying?
I'm asking the question because I have the option to pursue approximation theory for my master's thesis. Kolmogorov-type inequalities, to be precise. This question may fit more into the career and education thread, however I think it is general enough to get its own thread since it is about a field of math and may (or may not) generate a healthy discussion.
I have a fun little exercise for you all:
Pick a positive number and call this number x.
I will give you a starting value of c = 7. Now, take out a calculator and evaluate (x-4)/(c-4), and assign this to be the new value of c. Repeat 10 times.
Finally, evaluate 2-c for the last value of c you calculated. What is this value? I claim it is an approximation of the square root of x, surprisingly enough! Check and you'll see.
Personally, I think it's a pretty nifty trick. If you want to know how I came up with this seemingly arbitrary process and why it works, I recently wrote a math paper, which you can find on arXiv, that details the whole thing. If you have the time, please read it and let me know what you think! Thank you!
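For anyone who wants to try it without punching a calculator ten times, the steps above can be sketched as:

```python
def sqrt_trick(x, iterations=10):
    # The process from the post: start at c = 7, repeatedly set
    # c = (x - 4) / (c - 4), then return 2 - c.
    c = 7.0
    for _ in range(iterations):
        c = (x - 4) / (c - 4)
    return 2 - c

print(sqrt_trick(9.0))  # lands close to 3
```

Why it works, in one line: a fixed point of c = (x - 4)/(c - 4) satisfies (c - 2)^2 = x, i.e. c = 2 +/- sqrt(x), and the iteration converges to the 2 - sqrt(x) branch, so 2 - c converges to sqrt(x).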
Hey, so I have been working on a personal project in digital electronics, and at the moment I'm exploring approximations for 1/x for x between 0 and 1. I have come across the Taylor series, but implementing that on an FPGA/ASIC is very hardware-consuming due to the multiple multiplications required. Hence, I'm reaching out to this community for suggestions of other approximations to evaluate 1/x for values between 0 and 1.
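One standard multiplication-only suggestion (not from the post; a common hardware technique) is Newton-Raphson for the reciprocal, y <- y(2 - xy): each step costs two multiplies and a subtract, no division, and the error roughly squares every iteration. A Python sketch, assuming x has first been normalized into [0.5, 1) (as you would do with the mantissa on an FPGA), with the classic linear initial guess:

```python
def reciprocal(x, iterations=5):
    # Newton-Raphson for 1/x: y <- y * (2 - x * y).
    # Assumes x in [0.5, 1); the initial guess below is the standard
    # minimax linear approximation 48/17 - (32/17) * x.
    y = 48 / 17 - (32 / 17) * x
    for _ in range(iterations):
        y = y * (2 - x * y)
    return y

print(reciprocal(0.75), 1 / 0.75)
```

With a decent initial guess, 3-4 iterations already reach single-precision accuracy; in hardware the guess usually comes from a small lookup table, trading a little ROM for fewer iterations.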
I hope the mathematicians around here can provide me with an answer. I was learning about the zeta function from Wikipedia and playing around on Wolfram Alpha. Then a revelation struck, and I tested out the following computation.
Let n be an integer >= 2, and look at the formula:
C(n) = (2^(n+1) - 2) * pi^(-n) * zeta(n) * Gamma(n)
Why this formula? Usually, the values of zeta at even integers are described in terms of Bernoulli numbers. However, the Bernoulli numbers are 0 at odd indices, so the formula doesn't work there. But... there is a different sequence, related to the Bernoulli numbers at even indices, that also gives different, meaningful values at odd indices. This is Euler's zigzag number A(n), also known as the alternating permutation numbers, up/down numbers, or André numbers. The number A(n-1) is related to the Bernoulli number B(n), so the index is shifted by 1. This gave me the question: can you compute the zigzag numbers from values of the zeta function, and vice versa?
So I wrote a formula that computes the zigzag numbers, starting from the zeta function, for all even integers, using the known relationship between the zeta function, Bernoulli numbers, and zigzag numbers. This is the formula above. Now, what would happen if I plugged odd integers >= 3 into the formula? I tried it in Wolfram Alpha, and...
The values are extremely close!!! Here is the sequence for A(n):
Looking only at the even indices from 2 onward, the first few terms are 1, 5, 61, 1385, 50521, 2702765.
What about the values of C(n)? Plugging into Wolfram Alpha, the first few values (ignoring the digits after the decimal point) are, for n = 3, 5, 7, 9, 11, 13:
1., 5., 61., 1385., 50521., 2702768.
Yeah, they are not exactly equal, but it's still staggering how close they are. The first time the error becomes > 1 is at n = 13, and I checked up to the end of the listed sequence (n = 27): it gets over half of the digits correct.
My suspicion is that 0 <= C(n) - A(n-1) <= A(n-1)^(1/2) for all odd integers n >= 3, but I'm not sure.
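If anyone wants to reproduce the numbers without Wolfram Alpha, here is a minimal sketch (using a plain partial sum for zeta(n), which is accurate enough for n >= 3):

```python
import math

def zeta(s, terms=100_000):
    # Plain partial sum of the Dirichlet series sum 1/k^s.
    # The tail is roughly N^(1-s)/(s-1), which is negligible for s >= 3.
    return sum(k ** -s for k in range(1, terms + 1))

def C(n):
    # The formula from the post:
    # C(n) = (2^(n+1) - 2) * pi^(-n) * zeta(n) * Gamma(n),
    # with Gamma(n) = (n-1)! for integer n.
    return (2 ** (n + 1) - 2) * math.pi ** -n * zeta(n) * math.factorial(n - 1)

for n in (3, 5, 7, 9, 11, 13):
    print(n, C(n))
```

Taking the integer parts for n = 3, 5, 7, 9, 11 recovers 1, 5, 61, 1385, 50521, matching the even-index zigzag numbers A(2), A(4), ... listed above.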
EDIT: just to be safe, I also compared A(n-1) against (2^(n+1) - 2) * pi^(-n) * Gamma(n), which has no zeta factor. It does predict a few digits correctly, but the accuracy drops dramatically. This suggests that zeta(n) is an important factor in making the prediction close, even though it's almost equal to 1.
But I don't know enough math to see why. And after searching around the World Wide Web, I see no mentions of this relationship between zeta function and zigzag number, even though it seems so ob…