Quantum probability distributions for a particle trapped in an octagon-shaped well reddit.com/gallery/mbhlph
πŸ‘︎ 528
πŸ“°︎ r/DataArt
πŸ‘€︎ u/hudsmith
πŸ“…︎ Mar 23 2021
[OC] Probability Distribution in Monopoly Using Markov Chains
πŸ‘︎ 3k
πŸ“°︎ r/dataisbeautiful
πŸ‘€︎ u/challenging-luck
πŸ“…︎ Jan 23 2021
Do you need help with the expert challenge 'Land 5 hits with Spiny Shells'? Below is a table of the best positions to be in to get certain items, taken from the MarioWiki item probability distributions. Legend: 🟨 Easy, 🟧 Medium, πŸŸ₯ Hard. Spiny Shell: 4th - 7th place, with the best positions being 5th & 6th
πŸ‘︎ 222
πŸ“°︎ r/MarioKartTour
πŸ‘€︎ u/Cupid_Stunt_MKT
πŸ“…︎ Feb 16 2021
Probability distributions for the first 36 eigenstates of a particle trapped in a heart-shaped potential well!
πŸ‘︎ 220
πŸ“°︎ r/quantum
πŸ‘€︎ u/hudsmith
πŸ“…︎ Mar 02 2021
porco: Library for composing probability distributions docs.rs/porco/0.1.3/porco…
πŸ‘︎ 56
πŸ“°︎ r/rust
πŸ‘€︎ u/myli34
πŸ“…︎ Mar 21 2021
[D] Does the NN really learn a probability distribution?

I am currently trying to understand how a neural net learns a probability distribution.

At first glance, this seems impossible, since a neural net is basically a deterministic non-linear function.

As for the VAE, the encoder takes x as input and outputs the parameters of a distribution (typically the mean and covariance of a Gaussian). This seems plausible: just by adding a random variable epsilon ~ N(0, I) you can construct a normal distribution.
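
For concreteness, here is a minimal sketch of that reparameterization step (illustrative only; encoder is a hypothetical function returning the mean and log-variance of a diagonal Gaussian):

import numpy as np

def sample_latent(x, encoder):
    mu, log_var = encoder(x)                    # deterministic network outputs
    eps = np.random.standard_normal(mu.shape)   # epsilon ~ N(0, I)
    return mu + np.exp(0.5 * log_var) * eps     # z ~ N(mu, diag(exp(log_var)))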

But for the decoder, although much of the literature assumes it has the form of a probability distribution (namely P(X|Z)), it doesn't seem to. Unlike the encoder, there is no randomness in the structure of the VAE decoder.

Moreover, many VAE decoders assume a normal distribution over the output, but how can they be so sure?

Is there anything that I am missing?

πŸ‘︎ 7
πŸ“°︎ r/MachineLearning
πŸ‘€︎ u/BodybuildingPhD
πŸ“…︎ Mar 26 2021
Probability Distribution in Monopoly Using Markov Chains
πŸ‘︎ 1k
πŸ“°︎ r/compsci
πŸ‘€︎ u/challenging-luck
πŸ“…︎ Jan 27 2021
[OC] Alternate Point Buy system which encourages diverse or more flexible characters (at the expense of easier min-maxing.) Point values are derived from the probability distribution of 4d6 drop 6, and the total number of points is estimated to produce reasonable characters. More details in comment
πŸ‘︎ 8
πŸ“°︎ r/DnD
πŸ‘€︎ u/GameCounter
πŸ“…︎ Mar 08 2021
Probability distributions for the first 36 eigenstates of a particle trapped in a heart-shaped potential well!
πŸ‘︎ 173
πŸ“°︎ r/generative
πŸ‘€︎ u/hudsmith
πŸ“…︎ Mar 03 2021
Probability Distribution in Monopoly Using Markov Chains
πŸ‘︎ 518
πŸ“°︎ r/DataArt
πŸ‘€︎ u/jmerlinb
πŸ“…︎ Jan 27 2021
Does the Bayesian MAP give a probability distribution over unseen data?

I'm working my way through the Bayesian world. So far I've understood that the MLE and the MAP are point estimates, so models using them output one specific value and not a distribution.

Moreover, vanilla neural networks in fact do something like MLE, because minimizing the squared loss or the cross-entropy amounts to finding parameters that maximize the likelihood. Likewise, using neural networks with regularisation is comparable to MAP estimation, as the prior acts like the penalty term in the error function.

However, I've found this work. It shows that the weights [; W_{PLS};] gained from penalized least squares are the same as the weights [; W_{MAP};] gained through maximum a posteriori:

https://preview.redd.it/1288eloxbtn61.png?width=364&format=png&auto=webp&s=b0a1bd2cc653081efb9f7331ff05407d5f2ddc73

However, the paper says: "The first two approaches result in similar predictions, although the MAP Bayesian model does give a probability distribution for [; t_*;]. The mean of this distribution is the same as that of the classical predictor [; y(x_*; W_{PLS});], since [; W_{PLS} = W_{MAP};]."

What I don't get here is how the MAP Bayesian model can give a probability distribution over [; t_*;] when it is only a point estimate. Consider a neural network: a point estimate means some fixed weights, so how can there be an output probability distribution? I thought this was only achieved in the fully Bayesian treatment, where we integrate out the unknown weights, building something like a weighted average over all outcomes using all possible weights.
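
One way to write the distinction down (a sketch assuming the usual Gaussian noise model, not a quote from the paper): even a fixed W_MAP defines a predictive distribution through the noise model,

p(t_* | x_*, W_MAP) = N(t_* | y(x_*; W_MAP), sigma^2),

whereas the fully Bayesian predictive integrates over the weights:

p(t_* | x_*, D) = ∫ p(t_* | x_*, w) p(w | D) dw.

The first is a distribution because of the noise term alone; its mean is just the point prediction y(x_*; W_MAP).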

Can you help me?

πŸ‘︎ 2
πŸ“…︎ Mar 18 2021
How do probability distributions relate to modeling?

In my career, I've never worked with raw data that looks like any sort of probability distribution. Overwhelmingly, I have some equation that is supposed to represent how the data vary with some other data. Usually, signal vs time.

I assume I'm supposed to plug the real time into the model, subtract the model from the actual signal, and treat the residuals as draws from a probability distribution. But this has made learning statistics very hard for me, because it's not immediately obvious how most basic examples can be used or applied. For example, it seems like any time-ordered input results in non-independent output. It has to be non-independent because time has an order: one is never going to come after two, and three is never going to come before two. Similarly, I certainly hope my signal isn't independent of time. Yet independence seems to be a requirement for almost every statistical analysis I can find.
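
A minimal sketch of that residual idea (hypothetical straight-line model; the independence assumption applies to the residuals, not to the raw signal):

import numpy as np

t = np.linspace(0, 10, 200)                       # time: ordered, not random
signal = 2.0 * t + 1.0 + np.random.normal(0, 0.5, t.size)

a, b = np.polyfit(t, signal, 1)                   # fit the mechanistic model
residuals = signal - (a * t + b)                  # the part treated as random
print(residuals.mean(), residuals.std())          # roughly 0 and 0.5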

Basically, I'm having trouble applying stats to... anything. I mean, I don't care about correlations. I care about causes, and more specifically about mechanisms that imply mathematical relations. How do I test that? What do I google to read about testing that?

πŸ‘︎ 2
πŸ“°︎ r/AskStatistics
πŸ“…︎ Mar 13 2021
Cumulative probability distribution not converging? (Statistics)

I am given the continuous joint probability density function f(x,y) = { e^(-(x+y)) if x >= 0, y >= 0; 0 everywhere else }. We can see that it is greater than or equal to 0 at all points, and double-integrating from 0 to infinity gives 1.

I am asked to find the density function of U = X + Y, which I have done; it comes out to f_U(u) = { ue^(-u) if u >= 0; 0 everywhere else }. Now I'm asked to find the cumulative distribution function of this new variable U.

I guessed that, to do so, I had to integrate the original function f(x,y) from 0 to u-y (dx) and from 0 to infinity (dy). This, however, does not converge. Is this the wrong approach?
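
For comparison, a sketch with the outer limit at u instead of infinity (x >= 0 forces y <= u, which is what keeps the integral finite):

F_U(u) = ∫_0^u [ ∫_0^(u-y) e^(-(x+y)) dx ] dy = ∫_0^u (e^(-y) - e^(-u)) dy = 1 - e^(-u) - ue^(-u), for u >= 0,

which agrees with integrating f_U(u) = ue^(-u) directly.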

πŸ‘︎ 2
πŸ“°︎ r/learnmath
πŸ‘€︎ u/rbojon
πŸ“…︎ Mar 18 2021
Probability distributions and their relationships
πŸ‘︎ 342
πŸ“°︎ r/Infographics
πŸ‘€︎ u/ElmerMalmesbury
πŸ“…︎ Jan 08 2021
Probability Distribution in Monopoly Using Markov Chains
πŸ‘︎ 149
πŸ“°︎ r/math
πŸ‘€︎ u/challenging-luck
πŸ“…︎ Jan 27 2021
[Grade 12 Data Management: Normal Distribution] How do I find the probability in this question?

Given X ~ N(0,1);

find P(X = 0.5).

The answer my teacher gave was 1 - P(X < 0.5) - P(X > 0.5), which is equal to 0. But I want to understand how that answer is found.
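
A sketch of the standard reasoning: for a continuous distribution with density f, any single point has probability P(X = 0.5) = ∫_0.5^0.5 f(x) dx = 0. Equivalently, since P(X < 0.5) + P(X > 0.5) = 1 for a continuous variable, 1 - P(X < 0.5) - P(X > 0.5) = 0.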

πŸ‘︎ 4
πŸ“°︎ r/HomeworkHelp
πŸ‘€︎ u/Wafinator
πŸ“…︎ Mar 22 2021
Probability Distribution in Monopoly Using Markov Chains v.redd.it/hmo1meye63d61
πŸ‘︎ 219
πŸ“°︎ r/3Blue1Brown
πŸ‘€︎ u/challenging-luck
πŸ“…︎ Jan 23 2021
[Statistics] Probability Using Exponential Distributions

Hey friends,

I've been stuck on this problem for quite a bit, and could really use some guidance.

The time (in minutes) between arrivals of customers to a post office is to be modelled by the Exponential distribution with mean 0.72.

What is the probability that the time between consecutive customers is less than 15 seconds?

For some reason, using exponential distributions in this manner has really been throwing me for a loop, and I'm not entirely sure how to start. I've rewatched my lecture videos, and it's just not clicking. Does anyone have some extra resources that could help, or guidance on the problem? I would prefer to understand the mechanics of the topic, because I would really like to move on.
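
A sketch of the setup (using the standard parameterization; check it against your course's notation): a mean of 0.72 minutes corresponds to rate lambda = 1/0.72 per minute, 15 seconds is 0.25 minutes, and the exponential CDF gives

P(T < 0.25) = 1 - e^(-0.25/0.72) β‰ˆ 0.29.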

πŸ‘︎ 3
πŸ“°︎ r/learnmath
πŸ‘€︎ u/gotmycarstuck
πŸ“…︎ Mar 18 2021
Probability Distribution in Monopoly Using Markov Chains
πŸ‘︎ 187
πŸ“°︎ r/mathpics
πŸ‘€︎ u/challenging-luck
πŸ“…︎ Jan 27 2021
normalised cumulative distribution function from array of probability density function

For a continuous variable x and its probability density function p(x), I have a numpy array of x values x and a numpy array of corresponding p(x) values p. p(x) is not normalised, though: in a plot of p(x) against x, the area under the graph is not 1. I want to calculate a corresponding array of values of the cumulative distribution function cdf. This is how I'm currently doing it, using the trapezoidal rule to approximate the integral:

import numpy as np

pnorm = p / np.trapz(p, x)  # rescale so the total area under p(x) is 1

# running integral up to each x; note pnorm[:n] excludes index n itself
cdf = np.array([np.trapz(pnorm[:n], x[:n]) for n in range(len(pnorm))])

The results aren't entirely accurate; the final value of cdf is close to 1 but not exactly 1.

Is there any more accurate and simple way of normalising p and finding cdf? I thought there might be specific functions for this in some module; perhaps a statistics-oriented module with functions for related parameters (variance, confidence intervals, etc.) as well?
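
One possible alternative (a sketch, assuming SciPy >= 1.6 is available; older versions name the same function cumtrapz):

import numpy as np
from scipy.integrate import cumulative_trapezoid

pnorm = p / np.trapz(p, x)                       # normalise to unit area
cdf = cumulative_trapezoid(pnorm, x, initial=0)  # same length as x, cdf[0] == 0

With initial=0 the output lines up with x, and the final entry equals np.trapz(pnorm, x), i.e. exactly 1 up to floating-point error.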

πŸ‘︎ 2
πŸ“°︎ r/learnpython
πŸ‘€︎ u/O_I_GR
πŸ“…︎ Mar 11 2021
[E] A map of probability distributions and their relationships

In this paper from 2012 (full text here), Leemis and McQueston show a diagram of how probability distributions are related to each other. As I liked it very much, I extracted the chart from the pdf, turned it into a poster, and printed a giant version of it to stick on the wall of my apartment. I thought I would also share it here:

https://www.telescopic-turnip.net/documents/figures/distributions.pdf

For example, the (suitably standardized) sum of a large number of i.i.d. Bernoulli-distributed variables approaches a normal distribution. This chart extends that idea to all kinds of distributions and shows how you can go from one of them to another by changing some parameter, combining multiple variables, etc.

Further explanation and examples can be found here.

πŸ‘︎ 193
πŸ“°︎ r/statistics
πŸ‘€︎ u/ElmerMalmesbury
πŸ“…︎ Jan 08 2021
How does sample size affect probability in a normal distribution? (Part of a multi-step problem.)

http://imgur.com/a/6KEWVNL

Problem 8. I know that as sample size increases, the std dev decreases and the probability decreases. However, my gut says D and I don't know why.

This problem is based on the prompt in the first image for problems #5-10. It is specifically asking about #7's answer.

Does the probability decrease every time the sample size increases in a normal distribution (choice A), or does it depend on other factors (D)? Any help?

πŸ‘︎ 2
πŸ‘€︎ u/EpicGusher
πŸ“…︎ Mar 21 2021
Estimating a probability density distribution from spatially autocorrelated data

My apologies if this isn't the right sub for this question, given the geostatistics knowledge it requires. Please let me know if that's the case.

I am interested in finding the probability density function of the last day of frost across multiple years. The last day of frost is the last day of the year (DOY) on which the minimum daily temperature reaches freezing (32F or 0C). Where I am located, this is typically around the end of April, so the last day of frost falls between DOY 110 and 130. I have one data point per year for 41 years, so 41 data points in total to estimate the function (example of a dataset: 112, 130, 135, 120, ... up to n = 41).

The tricky part is the following: I have this last day of frost for multiple towns (around 800 in total), but I also have zones that group these towns by similar last days of frost. The way this grouping was done is unrelated to the matter at hand (it was done by k-means clustering); the important part is that there are currently 8 pre-defined zones. Example: zone 1 has 50 towns, zone 2 has 125, and so forth.

What I want is the probability density function of the last day of frost for each of those specific zones. To find this function, I thought of two approaches:

  1. Average the last day of frost over all towns within the zone. This gives me 41 data points in total from which to estimate the density function.
  2. Use the last day of frost of each town within the zone. Zone 1, with its 50 towns, would then give 50 * 41 years = 2050 data points.

Option 2 makes estimating a probability density function much easier, because much more data is available, but the issue I see with it relates to spatial autocorrelation: I would practically be estimating a density function from autocorrelated data, which isn't good because the data is supposed to be i.i.d.

Option 2 gives me a normal distribution, which is what I expect from the results. Option 1 gives me something pretty random.
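
For what it's worth, a sketch of how either option's data could be turned into a density estimate (hypothetical numbers; gaussian_kde is SciPy's kernel density estimator):

import numpy as np
from scipy.stats import gaussian_kde

days = np.random.normal(120, 6, size=50 * 41)  # placeholder for one zone's option-2 data
kde = gaussian_kde(days)                       # smoothed density of last frost DOY
grid = np.linspace(100, 140, 200)
density = kde(grid)                            # p(DOY) evaluated on a grid

Note that a KDE, like most standard estimators, still treats the inputs as i.i.d., so it does not by itself resolve the autocorrelation concern.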

One "excuse" I thought of for using option 2 is this: since the zones are based on regrouping towns that share similar last days of frost, can't we just say that there is a theoretical distribution for each zone, and that the data points we obtain for each town within these zones are possible samples from said distribution? This would be the equivalent of using a Monte Carlo simulation from a distribution and generating mu

... keep reading on reddit ➑

πŸ‘︎ 16
πŸ“°︎ r/AskStatistics
πŸ‘€︎ u/MorningGlory747
πŸ“…︎ Feb 11 2021
probability distribution for random values that are in the [0,1] interval

I need a probability distribution to model ratios that lie in the [0,1] interval. It would also be helpful if the distribution were symmetrical. The standard normal, for example, won't work, since it allows x values below zero, which messes up my calculations. The log-normal distribution is skewed, which isn't great for my purpose. Anyone got some suggestions?
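
One family worth checking (a sketch, not a definitive recommendation): Beta(a, a) is supported on [0, 1] and symmetric about 1/2 whenever the two shape parameters are equal; the a = 4 below is an arbitrary choice that makes it bell-shaped:

import numpy as np
from scipy.stats import beta

samples = beta.rvs(4, 4, size=1000)                  # symmetric Beta(4, 4) on [0, 1]
print(samples.min(), samples.max(), samples.mean())  # stays in [0, 1], mean near 0.5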

πŸ‘︎ 2
πŸ“°︎ r/quant
πŸ‘€︎ u/durraniali
πŸ“…︎ Feb 26 2021
[Q] Why, intuitively, should simulations of probability distributions converge to the theoretical distribution?

Let's say I derive the distribution of rolling dice theoretically, and then simulate it on a computer. As the number of simulations increases, the simulated distribution converges to the theoretical one. I think I understand this on the surface, but I am wondering if there is a deeper intuition. Is it because the theoretically derived one is literally the 'truth', so if you simulate it on a computer, the simulation is dictated/governed by the theoretical distribution?
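
A minimal sketch of the convergence in question (the law of large numbers for a fair die):

import numpy as np

rolls = np.random.randint(1, 7, size=100_000)  # simulate 100,000 fair-die rolls
freqs = np.bincount(rolls)[1:] / rolls.size    # empirical frequency of each face
print(freqs)                                   # each entry approaches 1/6 as the sample grows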

πŸ‘︎ 4
πŸ“°︎ r/statistics
πŸ‘€︎ u/Whynvme
πŸ“…︎ Feb 18 2021
A collection of graphs to demonstrate how the probability distribution of shots works in Phoenix Point (more in the comments)
πŸ‘︎ 59
πŸ“°︎ r/PhoenixPoint
πŸ‘€︎ u/4-Vektor
πŸ“…︎ Jan 05 2021
Saw this on Amazon... Uhh... did they seriously get the multimodal distribution wrong? (lol Bimodal). Since when has the Alpha-Phase Tetroencubator ever NOT shipped with a trimodal probability distribution to read local maxima density peaks?
πŸ‘︎ 16
πŸ“°︎ r/VXJunkies
πŸ‘€︎ u/mrtuxedo9
πŸ“…︎ Feb 24 2021
[Q] Reduce the values of a and b in a Beta distribution without changing the probability distribution

I wanted to use the Beta distribution in my ML model (a Bayesian bandit), where the probability of an event keeps changing as I get more data. But after a while, the values of a and b become so large that the program can't handle them (if the event was favorable I would increase a by 1, and if not I would increase b by 1).

Any idea how I can reduce the values of a and b?
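
One trick sometimes used in non-stationary bandits (a sketch, not necessarily right for this model): discount both counts by a factor gamma < 1 at each update, which leaves the posterior mean a/(a+b) unchanged while keeping a + b bounded at roughly 1/(1 - gamma):

def update(a, b, success, gamma=0.999):
    # gamma is a hypothetical discount factor; scaling both counts
    # preserves the mean a/(a+b) but caps their growth
    a, b = gamma * a, gamma * b
    if success:
        a += 1.0
    else:
        b += 1.0
    return a, b

The trade-off is a deliberately shortened memory: the posterior stays wider than it would with raw counts.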

πŸ‘︎ 7
πŸ“°︎ r/statistics
πŸ‘€︎ u/therobinhood7
πŸ“…︎ Feb 17 2021
[AP Statistics - Sample Distributions/Probability] I can't seem to figure out this bottom question... Can it be explained in detail?

https://preview.redd.it/9m3ormtrjhl61.jpg?width=640&format=pjpg&auto=webp&s=84bd95e4654ce2d11b856fe57a510b081d658742

πŸ‘︎ 8
πŸ“°︎ r/HomeworkHelp
πŸ‘€︎ u/_sercio
πŸ“…︎ Mar 06 2021
On pseudorandom number generation given a probability distribution

Can we generate pseudorandom numbers from a given probability distribution by using the uniform random generator that is built into most programming languages? If so, why does that work? If not, how can we generate pseudorandom numbers from a given distribution?
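
One standard construction is inverse-transform sampling (sketched here for the exponential distribution): if U is uniform on [0, 1) and F is an invertible CDF, then F^(-1)(U) has CDF F.

import math
import random

u = random.random()           # uniform on [0, 1)
lam = 2.0                     # hypothetical rate parameter
x = -math.log(1.0 - u) / lam  # inverse CDF of Exponential(lam), so x ~ Exp(lam)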

πŸ‘︎ 75
πŸ“°︎ r/compsci
πŸ‘€︎ u/deybamayana
πŸ“…︎ Dec 30 2020
[Statistics] I'm learning about the Poisson distribution in stats, and I'm very confused about the notation. It seems that p(x; mu) is the probability obtained from the distribution, but F(x; mu) seems to be something different entirely and is never explained in my lecture.

I would try to explain the context, but a picture is probably better.

What is this mysterious value F(x; mu), and how is it different from p(x; mu)? Never once in the presentation does she show her work for how she got F(x; mu), or even give it a name. What is this value and how am I supposed to use it?

Here's an example problem that can apparently be solved two ways, one with p() and one with F(). What is she doing here?
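
For reference, one common textbook convention (the lecture's notation may differ): p(x; mu) = e^(-mu) mu^x / x! is the Poisson probability mass function, the probability of exactly x events, while F(x; mu) = sum of p(k; mu) for k = 0, ..., x is the cumulative distribution function, the probability of at most x events. That would explain solving one problem two ways, e.g. p(2; mu) = F(2; mu) - F(1; mu).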

πŸ‘︎ 2
πŸ“°︎ r/learnmath
πŸ‘€︎ u/Boneless_Blaine
πŸ“…︎ Feb 19 2021
Probability distributions for the first 36 eigenstates of a particle trapped in a heart-shaped potential well!
πŸ‘︎ 17
πŸ“°︎ r/ScienceImages
πŸ‘€︎ u/hudsmith
πŸ“…︎ Mar 03 2021
Question related to the joint probability distribution when a pair of dice is rolled.

A pair of dice is rolled. Assume that X denotes the smallest value and Y denotes the largest value. Find the joint mass function of X and Y.

I understand that when x = y, the joint function will be f(x, y) = 1/36, where x = y = 1, 2, 3, 4, 5, 6. But when x and y are different (with x < y), how is f(x, y) = 2/36? Could anyone please explain how we get 2/36 for each ordered pair where x is smaller than y? Thanks
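
A sketch of the counting behind that 2/36: for x < y, the pair (min, max) = (x, y) arises from two equally likely ordered rolls, (first die = x, second die = y) and (first die = y, second die = x), each with probability 1/36, so f(x, y) = 2/36. When x = y there is only one such ordered roll, hence 1/36.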

πŸ‘︎ 2
πŸ“°︎ r/learnmath
πŸ‘€︎ u/zeeshas901
πŸ“…︎ Mar 03 2021
Expected value of a binomial distribution is the mode (has the highest probability of occurring). Prove?

I've looked at binomial distribution graphs and noticed that the expected value turns out to be the mode (I also realise multiple modes are possible). I'm pretty sure others have noticed this as well. It also makes intuitive sense: over many trials, the expected number of successes of a binomial event should be the count you observe most often.

However, is there an analytic way of proving this via the following?

Given X~B(n,p)

E(X) = np (round off if needed; assume np is an integer for this specific proof if you want)

Prove that P(X = E(X)) is the highest probability

Or: prove nCi * p^(i) * (1-p)^(n-i) is highest for i=E(X) for 0 ≀ i ≀ n

In other words, prove that the binomial probability mass function is maximised when the number of successes equals np.
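
One standard route (a sketch): compare consecutive probabilities through their ratio,

P(X = k+1) / P(X = k) = ((n - k) / (k + 1)) * (p / (1 - p)),

which is >= 1 exactly when k <= (n + 1)p - 1. The probabilities therefore increase up to k = floor((n + 1)p) and decrease afterwards, making floor((n + 1)p) a mode. When np is an integer and 0 < p < 1, floor((n + 1)p) = floor(np + p) = np, confirming the conjecture; when (n + 1)p is itself an integer, both (n + 1)p - 1 and (n + 1)p are modes.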

Thanks.

Edit: Note that my conjecture in this post could be wrong (E(X) not necessarily being the mode), in which case I'd be interested in understanding why, if possible.

πŸ‘︎ 2
πŸ“°︎ r/learnmath
πŸ‘€︎ u/xZakurax
πŸ“…︎ Mar 02 2021
Probability distribution problem

https://imgur.com/89xSTMa

πŸ‘︎ 3
πŸ‘€︎ u/nonga9
πŸ“…︎ Mar 20 2021
