Adjoint methods: how do they work, and how do you physically interpret the adjoint system?

Hi all, I am looking at an optimisation problem where I want to optimise a geometry using the adjoint method. Physically, I'm solving Maxwell's equations around a geometry using FDTD. I more or less understand that you can write Maxwell's equations as a linear set of differential equations, but what I don't understand is how to physically interpret what the adjoint of that linear system is doing. I've read a few papers claiming that the adjoint system just solves the backwards propagation of the original system, but I don't see why that would be the case. I was wondering if someone could give a sort of intuitive idea of what an adjoint does to a system.

Sorry if the question is broad and vague; that's mostly because I'm still a little confused by it.
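For intuition, the standard adjoint recipe for a discretized linear system can be sketched in a few lines of numpy (all matrices below are made-up stand-ins, not an actual FDTD discretization). The key point is that the adjoint solve uses the transposed operator with the objective acting as the source, which is why it looks like running the original system "backwards":

```python
import numpy as np

# Toy sketch (sizes and matrices made up): discretized linear problem A x = b,
# objective J = c @ x. One adjoint solve with A.T gives the sensitivity of J
# to *every* entry of A at once: dJ/dA[i, j] = -lam[i] * x[j].
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # stand-in for the system matrix
b = rng.standard_normal(n)                       # sources
c = rng.standard_normal(n)                       # "measurement": J = c @ x

x = np.linalg.solve(A, b)        # forward solve
lam = np.linalg.solve(A.T, c)    # adjoint solve: the transpose reverses the
                                 # flow, with the objective c as the source

# Cross-check one sensitivity against finite differences.
i, j, eps = 1, 3, 1e-7
dA = np.zeros((n, n)); dA[i, j] = eps
fd = (c @ np.linalg.solve(A + dA, b) - c @ x) / eps
adj = -lam[i] * x[j]
print(adj, fd)  # the two gradient estimates agree
```

The physical reading in the FDTD setting is the same idea: the adjoint field is excited at the objective (the "detector") and propagated through the transposed, i.e. time-reversed, operator.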

👍︎ 3
📰︎ r/learnmath
💬︎
📅︎ Oct 16 2020
🚨︎ report
Discrete adjoint method

Hello everyone,

I have a master's in mechanical engineering and followed some courses on numerical methods and optimization. Having attended a lecture on optimization, I stumbled upon the adjoint optimization method, which caught my attention.

From what I understood, the discrete adjoint method allows you to optimize very complex problems using software that already exists. For example, the discrete adjoint method is used to perform shape (topology) optimization of e.g. airfoils using CFD results, without the need to modify the CFD solver itself.

So I would like to learn and understand the adjoint method. My idea would be to apply the method to, e.g., the shape optimization of a trebuchet. Indeed, I can easily obtain and solve the trebuchet equations, and the objective is straightforward: optimize the few design variables so that the throw distance is maximized. Obviously I know I don't need the adjoint method in such a case, since I imagine one can derive and solve the problem analytically, but it would be interesting to practice on an "easy" toy problem.
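As a warm-up even simpler than a trebuchet, the discrete adjoint of a forward-Euler recurrence fits in a dozen lines of plain Python. The falling-with-drag model and all names below are made up for illustration; the pattern (forward sweep storing states, reverse sweep accumulating sensitivities) is the general one:

```python
# Toy discrete adjoint: dynamics v_{n+1} = v_n + h * (-g - k * v_n),
# objective L = v_N, design variable k (a drag coefficient).
g, h, N = 9.81, 0.01, 200
v0 = 0.0

def forward(k):
    vs = [v0]
    for _ in range(N):
        vs.append(vs[-1] + h * (-g - k * vs[-1]))
    return vs

k = 0.5
vs = forward(k)          # forward sweep, states stored for the reverse pass

# Reverse sweep: lam is the adjoint dL/dv_n, seeded with dL/dv_N = 1.
lam, dL_dk = 1.0, 0.0
for n in reversed(range(N)):
    dL_dk += lam * (-h * vs[n])   # parameter sensitivity of step n
    lam *= 1.0 - h * k            # d v_{n+1} / d v_n for one Euler step

fd = (forward(k + 1e-6)[-1] - forward(k)[-1]) / 1e-6
print(dL_dk, fd)   # adjoint and finite-difference gradients agree
```

The cost is one forward solve plus one (cheap) reverse solve, regardless of how many design variables there are, which is the whole selling point of the method.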

Could some of you point me to references/books/tutorials/MOOCs to reach this objective?

Thanks !

👍︎ 6
💬︎
📅︎ May 10 2019
🚨︎ report
[R] ICML 2020: Adaptive Checkpoint Adjoint (ACA), an ODE solver for accurate gradient estimation in Neural ODEs, enabling Neural ODEs to achieve ResNet-level accuracy for the first time

Hi everyone, you might be interested in our ICML 2020 paper if you are working on Neural ODEs or time-series modeling.

link to paper: https://arxiv.org/abs/2006.02493

-------------------------------------------------------------------------------------------------------------------------------------------------

  • We identify the numerical error introduced by the adjoint method in the gradient estimation of Neural ODEs.
  • We propose Adaptive Checkpoint Adjoint (ACA) to accurately estimate the gradient in Neural ODEs. In experiments, we demonstrate that Neural ODE training with ACA is both fast and accurate. To our knowledge, it is the first time a Neural ODE has achieved ResNet-level accuracy on image classification with adaptive solvers.
  • We provide a PyTorch package, https://github.com/juntang-zhuang/torch_ACA, which can be easily plugged into existing models, with support for multi-GPU training and higher-order derivatives.
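The checkpointing idea behind the method can be sketched in plain Python. This is illustrative only: the real ACA also adapts the solver's step size and checkpoint placement, and the toy dynamics and names below are made up:

```python
# Checkpoint-adjoint sketch: store only every K-th state on the forward pass,
# recompute each segment from its checkpoint on the backward pass.
f  = lambda x: -x**3          # toy dynamics dx/dt = -x^3
fp = lambda x: -3 * x**2      # its derivative
h, N, K = 0.01, 400, 50       # step size, steps, checkpoint interval
x0 = 1.5

ckpts, x = {0: x0}, x0        # N/K + 1 stored states instead of N
for n in range(N):
    x = x + h * f(x)
    if (n + 1) % K == 0:
        ckpts[n + 1] = x

# Backward pass for L = x_N: recompute each segment, then push the adjoint
# lam = dL/dx_n through it in reverse.
lam = 1.0
for seg_end in range(N, 0, -K):
    xs, x = [ckpts[seg_end - K]], ckpts[seg_end - K]
    for _ in range(K):                  # local recomputation from checkpoint
        x = x + h * f(x)
        xs.append(x)
    for n in reversed(range(K)):
        lam *= 1.0 + h * fp(xs[n])      # d x_{n+1} / d x_n per Euler step

def roll(x):
    for _ in range(N):
        x = x + h * f(x)
    return x

eps = 1e-6
fd = (roll(x0 + eps) - roll(x0)) / eps
print(lam, fd)   # checkpointed adjoint matches finite differences
```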

-------------------------------------------------------------------------------------------------------------------------------------------------

Results on CIFAR image classification

https://preview.redd.it/kzyabmbq7b551.png?width=1844&format=png&auto=webp&s=ae6fb145df62a08ad86829e93b3c88bbb65b80ee

Demo of numerically solving a three-body problem (unknown mass) with ACA for parametric ODE

https://reddit.com/link/ha8u7h/video/p2w82k4t7b551/player

👍︎ 91
💬︎
📅︎ Jun 16 2020
🚨︎ report
[R] Implementation of Neural Ordinary Differential Equations [slides + notebooks + code]

Hi, sharing my slides and notebooks on NeuralODE. During my talk I focused on explaining what ordinary differential equations are, how to solve them numerically (how to implement a simple black-box solver), how to integrate an ODE when the problem function is given by a neural network, and how to compute gradients with the adjoint method versus the naive approach. Finally, I covered the Continuous Normalizing Flows derived in the paper.

Link to repo: github/2019-03-Neural-Ordinary-Differential-Equations

In the repo you can find:

  • TensorFlow implementation of NeuralODE (eager mode + Keras API); for the sake of simplicity I implemented only a few fixed-grid solvers, i.e. Euler, RK2 and RK4
  • Jupyter notebooks which show how to implement a black-box ODE solver, integrate a neural network with it, use the adjoint method to optimize a bullet trajectory, etc.
  • re-implementation of Continuous Normalizing Flows:
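A fixed-grid solver of the kind the notebooks implement fits in a few lines. This is a plain-Python sketch, not the repo's TensorFlow code; the function names and defaults are illustrative:

```python
# Minimal fixed-grid "black box" ODE solver (Euler and classic RK4).
def odeint(f, y0, t0, t1, n_steps, method="rk4"):
    h, y, t = (t1 - t0) / n_steps, y0, t0
    for _ in range(n_steps):
        if method == "euler":
            y = y + h * f(t, y)
        else:  # classic fourth-order Runge-Kutta
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = y with y(0) = 1, so y(1) = e
print(odeint(lambda t, y: y, 1.0, 0.0, 1.0, 100))
```

Swapping in a neural network for `f` is what turns this loop into a NeuralODE forward pass.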


👍︎ 247
💬︎
📅︎ Mar 28 2019
🚨︎ report
[October] Discussion Topic Vote
👍︎ 2
📰︎ r/CFD
💬︎
📅︎ Sep 29 2020
🚨︎ report
[P] PyTorch implementation of EventProp: Backpropagation for Exact Gradients in Spiking Neural Networks

EventProp recently appeared on arXiv as a method to compute exact gradients for Spiking Neural Networks, enabling easier training with gradient methods. I took a shot at implementing it in PyTorch and seeing how it performs on MNIST (the original paper only presents experimental results on synthetic data). Here are a few results:

Left: internal state of an output neuron (it emits a spike whenever V surpasses 1) during a forward pass. Right: the neuron's adjoint variable used to compute gradients. (Compare with Fig. 1 in the paper)

Training curve of a single-layer SNN trained on MNIST for 40 epochs, achieving ~85% train/test accuracy.

Github: https://github.com/lolemacs/pytorch-eventprop

Paper: https://arxiv.org/abs/2009.08378

Abstract:

We derive the backpropagation algorithm for spiking neural networks composed of leaky integrate-and-fire neurons operating in continuous time. This algorithm, EventProp, computes the exact gradient of an arbitrary loss function of spike times and membrane potentials by backpropagating errors in time. For the first time, by leveraging methods from optimal control theory, we are able to backpropagate errors through spike discontinuities without approximations or smoothing operations. As errors are backpropagated in an event-based manner (at spike times), EventProp requires storing state variables only at these times, providing favorable memory requirements. EventProp can be applied to spiking networks with arbitrary connectivity, including recurrent, convolutional and deep feed-forward architectures. While we consider the leaky integrate-and-fire neuron model in this work, our methodology to derive the gradient can be applied to other spiking neuron models. We demonstrate learning using gradients computed via EventProp in a deep spiking network using an event-based simulator and a non-linearly separable dataset encoded using spike time latencies. Our work supports the rigorous study of gradient-based methods to train spiking neural networks while providing insights toward the development of learning algorithms in neuromorphic hardware.
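For readers new to the model: the leaky integrate-and-fire dynamics described in the abstract can be simulated in a few lines. This is an illustrative Euler discretization with made-up constants, not the EventProp implementation (which works event-by-event in continuous time):

```python
# Leaky integrate-and-fire neuron: dV/dt = (-V + i_in) / tau,
# spike and hard reset when V crosses the threshold 1 (as in the figure above).
tau, h, i_in = 0.02, 1e-4, 1.8   # time constant, step, constant input current
V, spikes = 0.0, []
for step in range(1000):         # simulate 0.1 s
    V += h * (-V + i_in) / tau
    if V >= 1.0:                 # threshold crossing emits a spike
        spikes.append((step + 1) * h)   # EventProp backpropagates at these times
        V = 0.0                  # hard reset
print(len(spikes), spikes[0])
```

The point of EventProp is that the adjoint variable only needs to be handled carefully at exactly these discrete spike times, which is also why the memory cost scales with the number of spikes.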



👍︎ 15
💬︎
📅︎ Sep 23 2020
🚨︎ report
need help checking answers (matrices)

Question:

A manufacturer produces 3 types of candies, namely A, B, and C. The ingredients used for these candies are grapes, mangoes and strawberries. Producing one unit of A requires 4 grams of grapes, 2 grams of mangoes and 3 grams of strawberries. Producing one unit of B requires 3 grams of grapes, 4 grams of mangoes and 3 grams of strawberries. Producing one unit of C requires 1 gram of grapes, 4 grams of strawberries and 5 grams of mangoes. The manufacturer has a daily supply of 250 kg of grapes, 450 kg of mangoes and 380 kg of strawberries. Let a, b, and c be the quantities of A, B and C (in thousand units) respectively.

a. Form a system of linear equations from the information above.

(equation 1) 4a + 3b + c = 250
(equation 2) 2a + 4b + 5c = 450
(equation 3) 3a + 3b + 4c = 380

b. Write a matrix equation for the system of linear equations obtained in (a).

AX = B

[ 4 3 1 ] [ a ]   [ 250 ]
[ 2 4 5 ] [ b ] = [ 450 ]
[ 3 3 4 ] [ c ]   [ 380 ]

c. Use the Gaussian elimination method to find the inverse matrix; hence, find the quantities of A, B and C (in thousand units).

adjoint matrix:

[ 4 3 1 ]
[ 2 4 5 ]
[ 3 3 4 ]

cofactor matrix:

[  1   7 -12 ]
[ -9  13  -3 ]
[ 11 -18  10 ]

matrix determinant: 247, using the cofactor expansion |A| = a11·C11 + a12·C12 + a13·C13

inverse matrix:

A^-1 = (1/247) × [ 4 3 1 ]
                 [ 2 4 5 ]
                 [ 3 3 4 ]

**When they say "let a, b, and c be the quantities of A, B and C (in thousand units)", does that mean converting the numbers from grams into kg, like 4000 g = 4 kg?

EDIT: reddit wouldn't let me write in the correct format now it's all confusing :(
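A quick numerical cross-check of part (c), added here for reference (numpy, not part of the original post). It suggests the posted determinant and the cofactor C13 are worth re-deriving:

```python
import numpy as np

# Solve the system from part (a)/(b) directly.
A = np.array([[4, 3, 1],
              [2, 4, 5],
              [3, 3, 4]], dtype=float)
B = np.array([250, 450, 380], dtype=float)

print(round(np.linalg.det(A)))   # 19 (not 247)
print(np.linalg.solve(A, B))     # a = 20, b = 40, c = 50 (thousand units)
```

With |A| = 19, the quantities come out to a = 20, b = 40, c = 50 thousand units, and they satisfy all three equations exactly.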

👍︎ 2
📰︎ r/learnmath
💬︎
📅︎ Oct 17 2020
🚨︎ report
I thought the point of composer and autoloading was that a file wouldn't be included until it was needed? Why is get_included_files showing me ~100 includes from my vendor directory on an empty page?

As the title says. I have a page that does nothing other than output get_included_files(), and it's showing me nearly 100 files from my vendor directory:

  • vendor/autoload.php
  • vendor/composer/autoload_real.php
  • vendor/composer/ClassLoader.php
  • vendor/composer/autoload_static.php
  • vendor/symfony/polyfill-mbstring/bootstrap.php
  • vendor/symfony/polyfill-ctype/bootstrap.php
  • vendor/symfony/polyfill-php73/bootstrap.php
  • vendor/phpstan/phpstan/bootstrap.php
  • vendor/symfony/polyfill-php72/bootstrap.php
  • vendor/symfony/polyfill-intl-idn/bootstrap.php
  • vendor/guzzlehttp/promises/src/functions_include.php
  • vendor/guzzlehttp/promises/src/functions.php
  • vendor/ralouphie/getallheaders/src/getallheaders.php
  • vendor/symfony/polyfill-php70/bootstrap.php
  • vendor/guzzlehttp/psr7/src/functions_include.php
  • vendor/guzzlehttp/psr7/src/functions.php
  • vendor/symfony/var-dumper/Resources/functions/dump.php
  • vendor/clue/stream-filter/src/functions_include.php
  • vendor/clue/stream-filter/src/functions.php
  • vendor/php-http/message/src/filters.php
  • vendor/guzzlehttp/guzzle/src/functions_include.php
  • vendor/guzzlehttp/guzzle/src/functions.php
  • vendor/laminas/laminas-zendframework-bridge/src/autoload.php
  • vendor/laminas/laminas-zendframework-bridge/src/Autoloader.php
  • vendor/laminas/laminas-zendframework-bridge/src/RewriteRules.php
  • vendor/markbaker/complex/classes/src/functions/abs.php
  • vendor/markbaker/complex/classes/src/functions/acos.php
  • vendor/markbaker/complex/classes/src/functions/acosh.php
  • vendor/markbaker/complex/classes/src/functions/acot.php
  • vendor/markbaker/complex/classes/src/functions/acoth.php
  • vendor/markbaker/complex/classes/src/functions/acsc.php
  • vendor/markbaker/complex/classes/src/functions/acsch.php
  • vendor/markbaker/complex/classes/src/functions/argument.php
  • vendor/markbaker/complex/classes/src/functions/asec.php
  • vendor/markbaker/complex/classes/src/functions/asech.php
  • vendor/markbaker/complex/classes/src/functions/asin.php
  • vendor/markbaker/complex/classes/src/functions/asinh.php
  • vendor/markbaker/complex/classes/src/functions/atan.php
  • vendor/markbaker/complex/classes/src/functions/atanh.php
  • vendor/markbaker/complex/classes/src/functions/conjugate.php
  • vendor/markbaker/complex/classes/src/functions/cos.php
  • vendor/markbaker/complex/classes/src/functions/cosh.php
  • vendor/markbaker/complex/classes/src/functions/cot.php
  • vendor/markbaker/complex/classes/src/functions/coth.php
  • vendor/markbaker/complex/classes/src/fun

👍︎ 4
📰︎ r/PHP
💬︎
👤︎ u/codemunky
📅︎ Jul 09 2020
🚨︎ report
Making custom subclasses of LinearOperator

Hi all,

I've been working on a physics project which uses SGD to find global minima of a potential function (a landscape, essentially). So far, the limited code works okay, but I want to extend and systematize it. I have a bunch of complex variables (matrices) which I'd like to define as classes, or rather as subclasses of tf.linalg.LinearOperator, so that I inherit the methods (adjoint, inverse, is_diagonal, etc.) while also retaining the high-performance aspects of the library.

Problem is, I am not that well versed in writing subclasses, and I can't make sense of the docs for the existing subclasses. Could someone point me to a minimal implementation? Any kind of advice would be appreciated. Also let me know if it's pointless and I can achieve all that by simply defining a matrix object in TF.
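One minimal pattern, shown here with SciPy's LinearOperator because its interface is smaller but analogous: you implement the matrix-vector products and inherit the rest (the adjoint comes for free from `_rmatvec`). The tf.linalg version works the same way in spirit but has more required methods, so treat this as a sketch rather than TensorFlow code; the operator itself is made up:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

class Shift(LinearOperator):
    """Toy operator that circularly shifts a vector by one position."""
    def __init__(self, n):
        super().__init__(dtype=np.float64, shape=(n, n))
    def _matvec(self, x):
        return np.roll(x, 1)
    def _rmatvec(self, x):          # defines the adjoint A^H
        return np.roll(x, -1)

S = Shift(4)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(S.matvec(x))      # [4. 1. 2. 3.]
print(S.H.matvec(x))    # [2. 3. 4. 1.], via the inherited .H adjoint property
```

The design point carries over: you only describe the operator's *action*, never materialize the matrix, and the base class supplies composition, adjoints and solver compatibility.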

👍︎ 2
📰︎ r/tensorflow
💬︎
📅︎ Apr 23 2020
🚨︎ report
[R] Neural ODEs

We have recently developed an Adjoint-based Neural ODE (ANODE), which computes unconditionally accurate gradients for Neural ODEs. This is very important, as the approach presented in arXiv:1806.07366 is numerically unstable and may result in divergent training (in several cases we observed >20% accuracy degradation because of this).

Link to Pytorch code:

https://github.com/amirgholami/anode

Link to papers:

https://arxiv.org/pdf/1902.10298.pdf

https://arxiv.org/pdf/1906.04596.pdf

We hope this library will be helpful. Please let us know if you have any feedback, and feel free to reach out if you have any questions.

Update: There have been some questions about whether using adaptive solvers can address the instability of solving ODEs backwards in time. Please note that it cannot: an adaptive solver does not resolve the instability of solving an unstable ODE backwards in time. See Figure 7 of our paper, which clearly shows an adaptive solver failing to address the instability. The claim stems from a misunderstanding of how adaptive solvers work; below we provide a detailed explanation which we hope addresses it:

To solve an ODE numerically we need to choose a step size. ODEs can be solved either with a fixed step size or with an adaptive one. Fixed-step methods use the same step size in every iteration, whereas adaptive methods change the step size dynamically. Ideally, we want a large step size so we can solve the ODE in the smallest number of iterations; however, the final error depends on the step size, and a large step size can lead to a large error. The alternative is an adaptive solver, which changes the step size dynamically to control the error associated with the step size. Note that this is unrelated to making an ODE that is unstable for *any* step size stable, which is what the Neural ODE paper's algorithm would require; one cannot solve an unstable ODE and make it stable with an adaptive solver. For example, a famous adaptive solver is RK45. This solver computes the solution of one time step with both RK4, a fourth-order method, and RK5, a fifth-order method. If the step size is chosen properly, the error between these two solutions should be small. Adaptive solvers measure this err
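The instability being described can be reproduced in a few lines with a toy scalar ODE (this is an illustration of the argument, not the paper's code):

```python
# Solve dx/dt = -20 x forward with Euler, then run the recurrence in reverse
# to recover x(0): once from the exact final value, once from a value
# perturbed by 1e-8. Backwards in time, the decay becomes growth, and the
# perturbation is amplified by roughly e^20.
lam, h, N = -20.0, 1e-3, 1000    # decay rate, step size, steps (T = 1)

x = 1.0
for _ in range(N):
    x = (1.0 + h * lam) * x      # forward Euler: x(T) ~ e^(-20), tiny
xT = x

errors = []
for start in (xT, xT + 1e-8):
    y = start
    for _ in range(N):
        y = y / (1.0 + h * lam)  # exact inverse of each Euler step
    errors.append(abs(y - 1.0))

print(errors)   # tiny error from the exact value, O(1) error from the perturbed one
```

No step-size policy changes this picture: any error present at the final time, including the solver's own truncation error, is magnified exponentially on the way back.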


👍︎ 170
💬︎
📅︎ Jun 26 2019
🚨︎ report
[D] TensorFlow implementation of Neural Ordinary Differential Equations

This was done by Pascal Voitot (@mandubian). He implemented a TensorFlow version of NeuralODE. I have only gone through the project and have not tried it myself.

https://github.com/mandubian/neural-ode

👍︎ 17
💬︎
👤︎ u/begooboi
📅︎ Mar 07 2019
🚨︎ report
Is it normal that I find most exercises unmotivated?

I am nearing the end of my second semester and I have never felt like I have been given any "interesting" problems. The exercises in books, the homework problems, the readings etc. all focus on building a unified theory of whatever subject I am learning. I am not saying this is easy, a lot of problems can be very challenging and that's fine but everything feels very dry. I am talking about problems like "Show that if T is a normal operator in a complex vector space V, then there exists a p in R[x] with p(T) = T^* and deg(p) = dim(V) - 1, where T^* denotes the adjoint of T." The solution requires a simple use of the spectral theorem. Of course, this shows me an application of the spectral theorem and thus helps justify(?) its existence as a useful problem solving method but I don't see the point of the problem. Of course my feelings aren't specific to linear algebra.

On the other hand I imagine a high school student asking similar questions for subjects he is learning that I might find completely relevant and motivated.

With all that said, I don't blame the mathematics, but there is an apparent lack of motivation for subjects and exercises. Certain books, like Hadlock's "Field Theory and Its Classical Problems" and Simmons' "Differential Equations with Applications and Historical Notes", successfully use problems of historical interest to motivate the subject, but needless to say they are rare, and most of the time I am still left wondering what the purpose of a particular exercise might be. At this point I only ever do them to test my understanding.

So, to cut it short I have two questions.

  • Do I have a false perspective, am I failing to do something right?

  • Is there anything I can do about it or do certain things only make sense after having learned even more?

Any advice and comment is appreciated.

👍︎ 14
📰︎ r/math
💬︎
📅︎ Jul 19 2019
🚨︎ report
Returning to CSE maths 4 years after high school

EDIT: Could a mod correct my mess in the topic name?

This is a long one... bear with me. If you will.

I am going into the second year of a Computer Science degree and we have a course called "Engineering Mathematics" (ME3) in the next semester.

I graduated high school a WHILE ago and honestly need to brush up before I start learning ME3. But I don't have the time to go through all the maths topics from all 4 years, so I was wondering if someone could help me decide what I should revisit and revise before going on to ME3.

Course Content of ME3

------------------------------------

1 - Linear Differential Equations (LDE)

- LDE of nth order with constant coefficients
- Method of variation of parameters
- Cauchy's & Legendre's LDE
- Simultaneous & Symmetric Simultaneous DE
- Modelling of Electric Circuits

2 - Transforms

- Fourier Transform
  - Complex exponential form of the Fourier series
  - Fourier Integral Theorem
  - Fourier Sine & Cosine Integrals
  - Fourier Sine & Cosine transforms & their inverses
- Z Transform (ZT)
  - Standard Properties
  - ZT of standard sequences & their inverses

3 - Statistics

- Measures of Central tendency
- Standard deviation
- Coefficient of variation
- Moments, Skewness and Kurtosis
- Curve fitting: fitting of straight line
- Parabola and Related curves
- Correlation and Regression
- Reliability of Regression Estimates

4 - Probability and Probability Distributions

- Probability, Theorems on Probability
- Bayes' Theorem
- Random variables
- Mathematical Expectation
- Probability density function
- Probability distributions: Binomial, Poisson, Normal and Hypergeometric
- Tests of Hypothesis: Chi-Square test, t-distribution

5 - Vector Calculus

- Vector differentiation
- Gradient, Divergence and Curl
- Directional derivative
- Solenoidal and Irrotational fields
- Vector identities; Line, Surface and Volume integrals
- Green's Lemma, Gauss's Divergence theorem and Stokes' theorem

6 - Complex Variables

- Functions of Complex variables
- Analytic functions
- Cauchy-Riemann equations
- Conformal mapping
- Bilinear t

👍︎ 3
📰︎ r/learnmath
💬︎
👤︎ u/BhooshanAJ
📅︎ Apr 19 2019
🚨︎ report
Do you want this? /s

A multidisciplinary, multi-objective "launch vehicle optimizer", including:

  1. 3 DoF trajectory optimization and analysis tool w/ Missile Datcom copy & parser. Validation w/ other relevant software (within +- 1% for orbital dv).
  2. 6 DoF trajectory simulation software, for sensitivity analysis and better performance estimation. (easily upgradeable to more "interesting" stuff)
  3. A propulsion analysis module with rough power-cycle analysis (textbook stuff), combustion chamber design, cooling geometry exported in 3d, and sizing for all major feed elements (from MEV to Regs & other). Includes cost estimates.
  4. A simplified structural analysis tool and adjoint documentation for aerospace grade aluminium propellant tanks, expected welding deformations, manufacturing processes, costs according to tolerances & SF, & other.
  5. An avionics sizing and costing tool with available, ITAR-free COTS equipment and analysis of the effect its performance might have on overall vehicle performance (CEP, injection accuracy, etc.)
  6. An overall vehicle optimizer for minimum recurring cost, minimum development cost, long-term scalability ("can it do 4X more payload with few changes"), among some selectable objective variables.

And a point-design, 500 kg payload to a 3000 km range vehicle, including but not limited to:

  1. Engine test data, achieved performance & needed work to achieve flight-performance (less than 5% improvement), manufacturing methods used & potential replacements.
  2. A functional avionics architecture on mostly COTS equipment and less than 15k overall cost. Integration and testing documentation for all, and ofc parts lists & up to 2 non-itar replacements.
  3. Main propellant tank manufacturing and destructive test data (somewhat highly instrumented), easily accessible materials and a credible path to low SF.
  4. 80 kN test stand, HWIL, and flight-vehicle control (TVC incl.) software. (A lot)
  5. Detailed CAD, blueprints, clean excels with all design data, etc. (300+ pages)
  6. A consolidated, 50-page document on how to build the flight vehicle in less than 1 year at less than 0.25 Mln USD in BOM cost. References to further docs.

I'm taking offers, BTC only. Material would be ITAR classified if you were in the US, but this throwaway account should be free to export it for $$$. PM for offers.

Thank you

Scud29

/s

👍︎ 4
📰︎ r/rocketry
💬︎
👤︎ u/scud29
📅︎ Apr 21 2019
🚨︎ report
Backpropagation for dummies codesachin.wordpress.com/…
👍︎ 40
📰︎ r/math
💬︎
📅︎ Dec 06 2015
🚨︎ report
Results of LREM's scandal-ridden ("à casserole") candidates

LREM and a few allies.

I'll let you google the details of each case. Most of them are collected in this article.

  • Elected:

-Bruno Le Maire, Eure, 1st district

His wife was employed as a probably fictitious parliamentary assistant.

-Richard Ferrand, Finistère, 6th district

You already know about this affair.

-Marielle de Sarnez, Paris, 11th district

Suspicions that European parliamentary assistants were working for the MoDem party in Paris.

-Bruno Bonnell, Rhône, 6th district

According to Mediapart, Infogrames, the company where he was CEO, received a 40,000-euro sanction from the Autorité des marchés financiers. He has also been criticized for domiciling companies in Delaware, "a small American state considered a tax haven" according to Libération.

-Claire O'Petit, Eure, 5th district

She denies having sought a Front National nomination in the regional elections, but according to Mediapart she was banned by the Bobigny court from "directing, managing, administering or directly controlling any commercial or artisanal enterprise, any agricultural operation or any legal entity (…) for a period of five years."

Racist.

-Pierre Cabaré, Haute-Garonne, 1st district

His nomination was suspended by La République en Marche after revelations about his ineligibility between 2003 and 2004. Except the party sent no one to replace him, and he continued campaigning with documents bearing the En Marche name... He is listed as "REM" on the interior ministry's website, so we'll see whether they strip him of the label or not.

-Corinne Vignon, Haute-Garonne, 3rd district

She is the subject of an investigation opened by the Toulouse prosecutor's office, which told BuzzFeed News that Corinne Vignon allegedly "carried out an undeclared astrology business."

-Alain Péréa, Aude, 2nd district

According to Le Midi Libre, he was called out in 2010 by the regional audit chamber of Languedoc-Roussillon. Alain Péréa was allegedly paid for 6 months of work, i.e. 9,000 euros, for writing a study on the Harkis and housing that was never published.

-Buon Tan, Paris, 9th district

A preliminary investigation into the candidate was opened after a complaint from the vice-president of the representative council of Asian associations of


👍︎ 91
📰︎ r/france
💬︎
👤︎ u/kccoc
📅︎ Jun 18 2017
🚨︎ report
My small town has a poison-pen letter writer... or, more accurately, a poison-keyboard status writer.

I live in a small town in southeast France, and for several years now a Facebook account has acted as a kind of poison-pen letter writer / blackmailer / whistleblower against the current mayor and local government in general. He or she took the name of a "famous" noblewoman who died in the late Middle Ages and publishes photos, videos and statements that compromise the mayor and his deputy (his "adjoint"). I find it very interesting that she (I'll say "she" because she assumed a woman's name) is still online. Even when she has a point, she often uses methods that are far too aggressive and personal for my taste. I wonder who she is. That's what I wanted to share with you. What do you think?

👍︎ 11
💬︎
📅︎ Jan 19 2018
🚨︎ report
Truly understanding Veach Thesis

Say I want to work on the light transport method explained in Eric Veach's thesis. Glancing through it, I see some operator formalism, a self-adjoint-operator formulation of light transport, path-space manifolds, etc.

I have some grasp of manifolds, but I really know nothing about functional analysis or advanced abstract linear algebra. What would be the prerequisites for understanding this thesis down to the nuts-and-bolts details?

[Here is the thesis](http://graphics.stanford.edu/papers/veach_thesis/)

👍︎ 6
💬︎
👤︎ u/Meguli
📅︎ Nov 23 2017
🚨︎ report
A List of companies that use Formal methods

I am looking for companies that actually use formal methods. Do you know of companies missing from the list? The ones I've found so far are:

Name Location Hiring Sector Source Remote OK?
Adjoint Boston, MA, USA No Finance Github
Cog Systems - - - Blog ?
Ethereum Switzerland - - Blog ?
Galois Portland, Oregon, USA Yes Consulting/Research ?
Kernkonzept Germany - - Site ?
Kaspersky Lab Moscow, Russia No Security/AV ?
Microsoft Redmond, USA - Software development Site ?
Spotify - - - Slides ?
Rockwell Collins USA, Cedar Rapids, Iowa - High Assurance Systems - ?
Trust in soft - - - Site ?
Trustworthy Systems - - - Site ?
JetBrains Saint Petersburg, Russia - - Site ?

Latest version of the list is here - https://gist.github.com/ligurio/4b8a647d9474ab90049a6b56c8c731e0

👍︎ 5
💬︎
👤︎ u/ligurio
📅︎ Oct 03 2017
🚨︎ report
Concerning the Fredholm Alternative and the adjoint/formal adjoint of linear differential operators, L, whose domain consists of analytic functions that are twice differentiable on the interval [a,b] and satisfy the linearly combined, mixed, Dirichlet and Neumann boundary conditions given by L.

My question is very simple. Given the adjoint of L (not to be confused with the formal adjoint of L), which in turn is given by the formal adjoint L+ such that the bilinear concomitant associated with L is 0, it is evident that if L acting on some function, call it φ, is a non homogeneous boundary value problem, say L[φ] = h(ζ), φ(ζ ) has a solution if, the adjoint exists, then there must be some function, ξ(ζ) that satisfies the boundary conditions that are a result of the bilinear concomitant associated with L being zero if and only if, the inner product of ξ(ζ) with h(ζ), (of course with respect to some weight function, call it ψ(ζ)), is zero; that is to say, given ξ(ζ) and h(ζ) are orthogonal with respect to ψ(ζ), there exists a solution to the non homogeneous, linear differential equation given by Lφ. Obviously this solution is not clearly evident and may not even have a closed form, with that being said, could one not simply use a modified Fourier-Hankel method to solve the system coupled of differential equations given by Lφ and L'𝜏 to obtain a solution by choosing the appropriate weight function, ψ(ζ), such that the inner product of h(ζ) and the inner product of h'(ζ) has an analytic solution? It seems obvious to me that this would yield a much more useful and applicable result.

Addendum: I know this is fairly mathematical, but this is an elementary problem that I feel not only has obvious repercussions in applied fields, but could be understood and intuited by any modern human.

👍︎ 13
📰︎ r/VXJunkies
💬︎
👤︎ u/Ralisis
📅︎ Mar 31 2017
🚨︎ report
What makes Self-Adjoint Operators special?

I just finished an introductory course on partial differential equations. We covered how to solve homogeneous problems by separation of variables, and nonhomogeneous problems with the method of eigenfunction expansion.

At the heart of the course was solving the Sturm Liouville (S-L) eigenvalue problem to get your eigenvalues and eigenfunctions from the boundary conditions.

All semester, when we solved the S-L problem, we had to show that the differential operator together with the boundary conditions made the operator self-adjoint, which I mastered; but the professor never really explained what exactly that tells us about the problem.

So my question is: what is self-adjointness, and why is it useful/important?
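One way to see the payoff is in the discrete analogue (a numpy sketch, not from the course): a self-adjoint operator with the right boundary conditions discretizes to a symmetric matrix, and symmetry is exactly what guarantees real eigenvalues and orthogonal eigenfunctions, the two facts that separation of variables and eigenfunction expansion rely on.

```python
import numpy as np

# -d^2/dx^2 with Dirichlet boundary conditions discretizes to a *symmetric*
# tridiagonal matrix (the discrete analogue of a self-adjoint S-L operator).
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

vals, vecs = np.linalg.eigh(L)   # eigh is for symmetric (self-adjoint) matrices

print(np.all(vals > 0))                        # eigenvalues are real and positive
print(np.allclose(vecs.T @ vecs, np.eye(n)))   # eigenvectors are orthonormal
```

Orthogonality is what lets you compute each expansion coefficient by a single inner product, which is precisely the step used all semester when expanding data in the S-L eigenfunctions.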

👍︎ 2
📰︎ r/learnmath
💬︎
📅︎ Aug 05 2013
🚨︎ report
Spectral theorem for unbounded operators

Hey there. I'm having a problem with proving the last step in the spectral theorem. Here's where I'm at:

For a self-adjoint operator A, I have constructed a projection-valued measure (i.e., a map P from the Borel sets to the bounded operators on my Hilbert space, where each P(\Omega) is a projection and P is additive on disjoint sets).

The projections satisfy, for \phi, \psi \in H and \Omega \in B(\mathbb{R}),

\langle \phi, R_A(z) \psi \rangle = \int_{\mathbb{R}} \frac{1}{\lambda - z} \, d\mu_{\phi,\psi}(\lambda)

where R_A(z) is the resolvent of A at the point z, and \mu_{\phi,\psi} is the unique complex measure (this is proven earlier) that makes this equation true.

Now I want to prove that

\langle \psi, A \psi \rangle = \int_{\mathbb{R}} \lambda \, d\mu_{\psi}(\lambda)

where \mu_{\psi} = \mu_{\psi,\psi} is non-negative.

If it helps, the book I'm using is Mathematical Methods in Quantum Mechanics by G. Teschl.

(Okay, I couldn't get the site's LaTeX rendering to work; every time I tried to make a subscript, e.g. R_A, it broke, so the formulas above are written as plain LaTeX.)

👍︎ 3
💬︎
👤︎ u/Papvin
📅︎ Jan 15 2014
🚨︎ report
[September] Discussion topic vote
👍︎ 5
📰︎ r/CFD
💬︎
📅︎ Aug 31 2020
🚨︎ report
[March] Adaptive Mesh Refinement

As per the discussion topic vote, March's monthly topic is "Adaptive Mesh Refinement".

Previous discussions: https://www.reddit.com/r/CFD/wiki/index

👍︎ 12
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Mar 03 2020
🚨︎ report
[July] Discussion topic vote
👍︎ 4
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Jul 01 2020
🚨︎ report
[June] Discussion topic vote
👍︎ 4
📰︎ r/CFD
💬︎
📅︎ Jun 01 2020
🚨︎ report
[May] Discussion topic vote
👍︎ 4
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Apr 29 2020
🚨︎ report
[Discussion Topic Vote] February 2020
👍︎ 3
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Jan 28 2020
🚨︎ report
[Discussion Topic Vote] March 2020
👍︎ 8
📰︎ r/CFD
💬︎
📅︎ Feb 27 2020
🚨︎ report
[August] Adjoint optimization

As per the discussion topic vote, August's monthly topic is Adjoint optimization

👍︎ 12
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Aug 01 2018
🚨︎ report
[Discussion Topic Vote] January 2020
👍︎ 2
📰︎ r/CFD
💬︎
📅︎ Dec 30 2019
🚨︎ report
[Discussion Topic Vote] December 2019
👍︎ 5
📰︎ r/CFD
💬︎
📅︎ Dec 01 2019
🚨︎ report
[Discussion Topic Vote] October 2019
👍︎ 4
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Sep 29 2019
🚨︎ report
[Discussion Topic Vote] November 2019
👍︎ 3
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Oct 31 2019
🚨︎ report
[Discussion Topic Vote] September
👍︎ 3
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Aug 31 2019
🚨︎ report
[Discussion Topic Vote] July
👍︎ 2
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Jun 29 2019
🚨︎ report
[Discussion Topic Vote] August
👍︎ 2
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Jul 30 2019
🚨︎ report
[Discussion Topic Vote] May
👍︎ 5
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Apr 29 2019
🚨︎ report
[Discussion Topic Vote] June
👍︎ 2
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ May 30 2019
🚨︎ report
[Discussion Topic Vote] March
👍︎ 4
📰︎ r/CFD
💬︎
📅︎ Feb 27 2019
🚨︎ report
[Discussion Topic Vote] April
👍︎ 3
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Mar 30 2019
🚨︎ report
[Discussion Topic Vote] February
👍︎ 5
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Jan 26 2019
🚨︎ report
[Discussion Topic Vote] December
👍︎ 4
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Nov 26 2018
🚨︎ report
[Discussion Topic Vote] January
👍︎ 3
📰︎ r/CFD
💬︎
👤︎ u/Rodbourn
📅︎ Dec 23 2018
🚨︎ report
