Background: no professional IT experience -- I'm a career changer and have been studying and obtaining certs for the past year. This is my first CompTIA cert, and my primary reason for pursuing the Sec+ is that it's a standard cert HR looks for in the positions I'm interested in.
Study Methodology: When I first started studying for the Sec+, I downloaded the objectives from CompTIA's website and put them into an Excel spreadsheet (domain # and topic name), then added 3 columns: definition/explanation, example/purpose, and scenario. As I studied Messer, Dion, Gibson, et al., I'd plug the relevant info into my spreadsheet, entering only information that I didn't know, didn't remember, or easily confused with something else. Every now and then, I'd discover a conflict or error . . . some tidbit of information credited to an incorrect domain, or authors giving different accounts of how a particular topic is defined. Those were interesting and led to a lot of googling to see if I could find a more definitive answer . . . and sometimes there just isn't one. That's probably why CompTIA uses a lot of "what is the BEST response" questions for a given scenario. I used Messer's videos to fill in info that I didn't fully understand, and finished up with practice tests from Messer, Dion, and Gibson. All of the practice tests were helpful; I particularly liked Dion's and Messer's. I would highly recommend checking out Gibson's site https://gcgapremium.com/501-extra-ptqs/ which turned out to be very helpful, as were the sims at the beginning of Messer's quizzes (should you choose to purchase them).
Day of the Exam: I figured that if I didn't already know it, I wasn't going to learn it in the next few hours, so I was done with studying. But I did sign on to Reddit to catch up on the past few days and read some words of encouragement. It's always nice to see when someone passes, and if they don't, the words of encouragement from others to keep on trying. I knew I'd be one of those posts by this afternoon . . .
The Exam: I got 81 questions - 5 of which were PBQs. They were all drop-down or click-and-drag. I spent the first 3 minutes trying to figure out how to do one of them . . . it turns out the question on screen was covering the items that were to be dragged, so once the question was minimized, ta-da, it began to make sense. (Yeah, it was my first CompTIA cert!) I clicked "review" on 3...
$AXDX SHORT SQUEEZE THESIS
$AXDX is a company that develops medical diagnostic products. It has been heavily shorted by big firms, including Citron Research, for a long time. Citron released a negative report on the company in 2015 and, ironically enough, cited Theranos as the gold standard of quick diagnostic testing. If you don't know what happened with Theranos, See Here. They should be very embarrassed about that, because the target of the report should have been Theranos. $AXDX has proven to be an honest and successful company whose technology allows medical professionals to quickly determine whether blood infections (sepsis) in patients will be resistant to certain antibiotics, which means doctors can choose a more effective antibiotic and potentially save the patient's life. Traditionally, you had to culture the pathogen, which takes a while and may not even work, while the patient is dying. They sell several revenue-generating instruments and have received FDA approval for new products as recently as last September.
I think it is likely to squeeze because, by my calculations, shorts have not been able to fully cover this stock since the most recent short interest data was released on the Nasdaq website. There's also no real reason for me not to hold this company long. Let's take a look.
The most accurate, up-to-date short interest data (how many shares are shorted) for all stocks was released on 1-27-21, and that data was consolidated approximately 2 weeks prior, on 1-15-21. The data is released every 2 weeks, and by the time it is released it is already approximately 2 weeks old. This is important to remember. Some stocks with a low "days to cover" figure (how many days it would take for shorts to cover their positions) can already be covered by the time you even have access to the data. There are paid services that claim to have more up-to-date numbers, but the accuracy of their data is hotly debated. See linked reddit po...
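For anyone new to the metric: "days to cover" is just short interest divided by average daily trading volume. A minimal sketch of the calculation, using made-up numbers (these are not actual $AXDX figures):

```python
# Days to cover = shares sold short / average daily trading volume.
# Both inputs below are hypothetical, purely for illustration.

def days_to_cover(short_interest: float, avg_daily_volume: float) -> float:
    """Rough estimate of how many trading days shorts would need
    to buy back their entire position at the average daily volume."""
    return short_interest / avg_daily_volume

short_interest = 12_000_000   # shares currently sold short (made up)
avg_daily_volume = 1_500_000  # average shares traded per day (made up)

print(f"Days to cover: {days_to_cover(short_interest, avg_daily_volume):.1f}")
```

A low value means shorts can exit quickly, which is exactly why a position can already be covered before the biweekly data is even published.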
Does anyone know how DeepMind manages their research projects internally? I've heard vague things about setting milestones and a tracking system for who contributes which research ideas, but not much beyond that. I'm guessing it's partly inspired by Agile development practices but adapted to the ML research setting. Any insights from those in the know would be great, as they seem to be particularly effective at churning out research.
While many technologies are short-lived and superseded by new ones or newer versions, the Agile methodology has survived for more than 20 years practically unchanged and without serious competition. I wonder how that's even possible! Are there really no better methodologies on the horizon? Will the Agile methodology ever end?
(Trey Lance, Dak Prescott, Deshaun Watson)
An author of a book I teach (Jazz English, Gunther Breaux) emailed me with new books and his methodology. Full disclosure, I'm really not a fan of his previous book and I want to know what you think.
Here's his methodology.
Rinse and repeat. No grammar. Students will understand the questions because they are about themselves. (his words again)
I always assumed the methodology of the book just wasn't my style of teaching but now I feel that might not be the problem. What do you guys think?
Short video on the course books with examples: Writing for Speaking Book Series
Short video on his methodology: Conversation-based Learning
For those of you still not familiar with the composition of the PSYK ETF, here's how they arrived at the stock selection. The net assets of the fund are $4,650,861.
You can find all info about the fund here...
I'm a social scientist and Zen Buddhist practitioner. This question is for people who are familiar with the philosophy of science and/or Buddhist metaphysics, but would love to hear thoughts anybody might have to offer:
I understand that Buddhist metaphysical theory is extraordinarily diverse across schools and that these theories were not developed to guide scientific inquiry... however, metaphysics grounds research methodology by providing a framework for thinking about the nature of reality, human observation and perception, and the possibility of building knowledge. Most social sciences are rooted in realism (which assumes a reality independent of human observation) or constructivism (which, in its strongest forms, argues for multiple realities dependent on human observation). Such metaphysical theories rely on dualistic accounts of reality (subject-object), whereas Buddhism posits that this distinction is false and instead argues for non-dualism (again, this is my understanding from Zen Buddhism; it may differ across schools).
I was wondering if anybody is aware of attempts to create a philosophy of science (or, more broadly, of knowledge-building) that is rooted in Buddhist metaphysics and/or non-dualism? What assumptions would we make, and what would such a methodology entail?
I know this is a highly specific and maybe even technical question, but it's interesting to ponder, and I'd be curious to hear your thoughts.
edit: I’m receiving a lot of personal advice on how to use Buddhist philosophy in my life and, tbh, it wasn’t asked for. I know that this isn’t important in terms of personal practice. I also understand the limitations here. I’m looking to discuss potential application of Buddhist metaphysics for a philosophy of knowledge production and science. I might be a Buddhist but I’m also a social scientist trying to solve real-world problems here, and interested in seeing if there’s synergy in terms of metaphysical bases for action.
Robert Whittaker has a background in amateur freestyle wrestling, having won 2 major wrestling tournaments in Australia (the Australian National Wrestling Championships and the Australian Cup). If you watch the first Romero-Whittaker fight, you can see how good Bobby's takedown defense is. Now, I'm not saying he can beat Adesanya with wrestling, but he's a smart fighter, and I think if he has a good game plan he can pull it off. Silva, like Adesanya, was also very elusive with great footwork, but Sonnen dominated Silva at UFC 117 for most of the fight with his wrestling and, heading into the fifth round, Sonnen was ahead on the judges' scorecards (40–34, 40–36, and 40–35).
I am new to Bear App; it seems like a great note-taking application. How does everyone use their tags? How do you organize them? Which methods do you use? I am a little lost on how to get started. I know how the app works, but I can't wrap my head around HOW to tag everything correctly.
First-time FPLer here. I did stats at uni and am a Data Analyst, so I'm really keen to understand the "underlying statistics" better.
From what I understand, xG/xA essentially look at all actions (presumably at high levels of football, and over the last 10 years or so) and evaluate how often those have historically resulted in a goal/assist respectively.
Can anyone with a better understanding of the methodology chip in on some of the limitations, and on what other metrics serve as alternatives? Looking at my definition above, it ignores shot angle, shot strength, and defender or goalie position, to name a few - and these can be pretty material.
I'm also very curious to hear how closely xG/xA actually predict real Goals and Assists!
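Not any provider's actual model, but the basic idea described above can be sketched as an empirical conversion rate: bucket historical shots by some feature (here, distance only) and use each bucket's historical goal rate as the xG of a new shot. All shot data below is invented:

```python
from collections import defaultdict

# Toy xG model: P(goal) estimated as the historical conversion rate of
# shots in the same 10 m distance band. Real models also use shot angle,
# body part, assist type, defender/keeper positions, etc.
shots = [  # (distance_in_metres, resulted_in_goal) - invented data
    (6, True), (6, True), (6, False), (6, False),
    (12, True), (12, False), (12, False), (12, False),
    (25, True), (25, False), (25, False), (25, False),
]

goals = defaultdict(int)
attempts = defaultdict(int)
for dist, scored in shots:
    band = dist // 10          # crude 10 m distance bands
    attempts[band] += 1
    goals[band] += scored

def xg(distance: int) -> float:
    """Historical conversion rate for shots in this distance band."""
    band = distance // 10
    return goals[band] / attempts[band]

print(xg(8))   # 0-10 m band: 2 of 4 shots scored -> 0.5
print(xg(12))  # 10-20 m band: 1 of 4 shots scored -> 0.25
```

This also makes the limitations above concrete: any feature left out of the bucketing (angle, keeper position, shot strength) is simply invisible to the model.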
I frequently see disgusting "racial theories" spread online, especially in far-left and far-right political forums that cite these fraudulent "studies"... It is pure propaganda that has been disproven multiple times, yet some people still cite it.
On my day off from work/school today, I looked into these racist "sources" and encountered the worst statistical methodology I've ever seen. I also learned a lot of other information you may find fascinating.
Here's what I gathered:
Richard Lynn's sample for Serbian IQ was 297 middle-aged women from Sandzak, and he used their scores as the IQ level for both Serbia and Bosnia. The Serbian/Bosnian population is over 10 million. He went to a random village and treated those few hundred people as representative of millions. Absolutely horrible statistics! Serbian and Bosnian IQ
Balkan IQ propaganda: "The mean IQ for the Balkan countries of Serbia, Croatia, Romania, Bulgaria, Greece, and Turkey were 89, 90, 94, 93, 92, and 90". Then a few paragraphs later: "Why should the IQ scores and educational attainments be lower in the Balkans than elsewhere in Europe? Lynn (2006) suggested that one explanation is that the people in this region are a hybrid population who comprise a genetic mix between Europeans and Muslim Turks."
If Western and Northern European IQ is 100, European Balkan IQ is 90, and an even-split hybrid population existed to "lower" the scores, that would imply Turkish IQ was originally 80 (the method he used for North/South Italy). Not only is this impossible, since Turkey's PISA scores are significantly higher than those of most European Balkan countries (Turkish PISA), except Slovenia and Croatia, and are approaching Western levels, but Albanians, Serbians, Bulgarians, and Slovenians aren't actually "half Turkish", or even one-tenth Turkish; most measure under 2% or nothing. Despite this basic fact, established by the massive genetic databases we have today, Lynn and his neo-Nazi friends (I'll get into that too) still peddle "racial theories" to undermine human beings.
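To make the sampling failure concrete: a sample's usefulness depends on representativeness, not just size. A toy simulation (all numbers invented, nothing to do with any real test data) showing that 297 people drawn from a single unrepresentative region stay biased no matter what, while 297 drawn proportionally land near the true mean:

```python
import random

random.seed(0)

# Hypothetical country of three regions with different true means.
regions = {           # region: (true regional mean, population)
    "A": (100, 400_000),
    "B": (95, 300_000),
    "C": (85, 300_000),
}

def sample_mean(weights, n=297):
    """Draw n individuals, choosing each one's region by the given weights."""
    names = list(regions)
    total = 0.0
    for _ in range(n):
        region = random.choices(names, weights=weights)[0]
        mean, _pop = regions[region]
        total += random.gauss(mean, 15)
    return total / n

# True population mean, weighted by region size: 94.0
true_mean = (sum(m * p for m, p in regions.values())
             / sum(p for _, p in regions.values()))

one_village = sample_mean([0, 0, 1])   # all 297 from region C only
proportional = sample_mean([4, 3, 3])  # 297 drawn in proportion to population

print(true_mean, round(one_village, 1), round(proportional, 1))
```

The one-village estimate clusters around 85 regardless of sample size; only the proportional draw recovers something near the true 94.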
[Albanian PISA](https://gpseducation.oecd...
Do you know someone (or are you personally) using renewable-energy or natural-building methodologies like rainwater harvesting, local stone construction, mud construction, or coconut/hay construction, or any similar methodologies? How has your experience been?
For everyone doubting how this wizardry is possible, here is a video showing how many different ways there are to build a sustainable house.
I am a former Kumon instructor and a current independent tutor. I have a student seeking help on L81-84, and I personally was not exposed to this level, as I worked with Junior and lower levels. I am familiar with the material, since I have an engineering degree, but I am not sure what the Kumon methodology for it is. I do not want to lead the student astray, so I am looking for someone who can confirm/explain the solution methodology. It might be resolved when I actually talk to the student, based on the previous worksheets they did and what they learned in school, but I'm just trying to do my due diligence in preparing for our session.
- Trying to understand the expectation for the table on L81b, since it is not the same size as the example, and y' at 0 is not 0 as in the example - answered: it used the factoring of y' to determine which x values to use in combination with the interval points
- Unsure whether the expectation for L82a is to graph or just to solve with a table - I didn't know the best way to explain this question to the student. She was confused about it too lol
- Needing context on whether these concepts are being taught for the first time or just reinforced, since as I mentioned above I'm really not too familiar with this level - really I just need a breakdown of how the levels are built/an informal training on the upper levels. My understanding is that the only way to be an upper-level instructor is to have actually completed those worksheets yourself. Is this accurate?
Edit: I met with the student, and she has the key, so we're okay.. but I would still appreciate some general knowledge on the upper levels :) I updated the questions I had accordingly.
A new blog post from Ellis Amdur primarily about Japanese martial arts and kata:
>[T]raditional Japanese martial arts have been practiced for hundreds of years by individuals, 99% of whom never experienced any sort of combative engagement. If a combative method is practiced without combative experience, it inevitably degenerates or changes into something else. Even without the anvil of war, if one doesn’t regularly pressure-test pattern-drills, they inevitably deteriorate, from generation to generation: elements of drama are added, or someone ‘innovates,’ not based on experience, but because, in their imagination, their innovation will work. Because such an individual is in authority, they are usually not challenged by their students, no matter how inane the methodology; their new method becomes the ‘real method,’ and elegant rationalizations are created to justify the technique.
Just got my first job at a small company where I am actually the only engineer. The VP has an associate's in MET or something like that, but it's been a minute. Anyway, he has been overseeing operations for going on 2 decades and has the current operations down pat.
The problem is that I, bringing fresh eyes and a mind for efficiency, see and hear about all the daily issues and warranty claims that must be handled (smallish manufacturing company), and I can see many areas that are running way below what they could be. I took it upon myself to start studying Six Sigma and learn the concepts (I actually finished my White Belt cert today) so I can bring potential improvements to different areas.
The trouble I'm having is finding any papers, articles, or case studies on implementing a Six Sigma culture in small companies. When I say small, think 1 engineer, 1 HR, 1 IT, 1 EHS, a handful of office workers, and about 70 total labor workers.
Is there a way to take advantage of the small size for process improvement? I know there is a ton of room for things to be better; I just don't know how to do it... yet 👍🏻 Please share any ideas, suggestions, or resources you have.
In 1970, Detroit's local governmental and business elites concluded a comprehensive 5-year-long study of the broader region that strongly alluded to and emphasized the future consolidation of the three-county area (Wayne, Oakland, and Macomb) into what it called the "Central Region". Including the city of Detroit, there would be 22 "boroughs" throughout the new megacity, with space to spare for parks, farms, fields, and other districts on the outskirts of the new city limits. If the current population of the combined counties created the megacity today, Detroit's population would be nearly 4 million (3.88 million, with target borough pop. at 100,000 people), making it the second-largest US city after NYC itself.
I decided early on that an easy way to plan consolidating all the little cities and townships would be to first figure out how much you'd want to "grow" Detroit by. So I picked the easiest and most likely cities to add (poor, declining industrial towns in the same financial situation: Melvindale, Ecorse, River Rouge, Hamtramck, Harper Woods, Highland Park, Redford), and that coincidentally worked out to over 112k people. That made figuring out a good size for consolidating the rest of the 140+ municipalities in the new city super easy, since it told me not to go under 100k or above 199k. That's why the borders shift and change orientation the further the population spreads out from the central city; it doesn't indicate larger populations on the fringe. What do you guys think? Even though this is a very rough final plan, I'm pleased with how some of the numbers are turning out. If you're familiar with metro Detroit, I encourage you to analyze these boundaries and compare them to their political leanings in the last general election, and to the region's life expectancy demographics as well. Having a larger city of Detroit, with wealthy outer-ring suburbs subsidizing the disinvested core through their property taxes, and with political entities whose constituencies have roughly equal political influence, would make Metro Detroit a world leader in cutting-edge regional/munici...
I know everyone is big on Cathie Wood's ARK ETFs - I have been looking at all of them closely to decide whether I want to jump in or not. Does anyone know what type of criteria they might be using to make daily transactions? I understand that they have tons of industry experts, that they actively manage, and that they do strong due diligence, but some of their transactions from today don't really make sense:
(1) ARKQ today sold nearly 18,000 shares of SPCE. As everyone knows, SPCE is about to do a flight test on Friday, Dec. 11, and many people believe the stock could rise significantly if the test goes well. So why would ARKQ sell SPCE shares before the test? Do they somehow know something that others don't? Is SPCE going to cancel the test, and is that why they sold all these shares today?
(2) ARKF bought nearly 19,000 shares of DOCU today, on a day that JPMorgan downgraded the stock and it took a 6% decline (and is down after hours as well). What might be ARKF's reasoning to buy DOCU now, when it seems overvalued?
I don't really understand the concept of must-bans in Siege, given that, not to state the obvious, neither team can use the banned character. How are there must-bans, or at least really common bans? For example, what is the value in banning Jackal? He may be good, but since both teams could otherwise use him, why does he feel like he's banned every game?
Personally, I like to keep it basic and use their role in the story as my placeholder, with names like the younger brother, the champion, the fortress, home, etc.
What's your method? Do you have a smarter, more creative way of doing it? Do you do it similarly to how I do mine?
Imagine there's a disease (not COVID) that is currently contaminating 1 person in 1000 in your town. There's a test that is 99% reliable. You take the test for no reason other than curiosity (you are not a contact case, nor do you have symptoms). The test result is positive. Are you more likely contaminated or not?
If we go the standard SE route, we can see that the test itself is 99% reliable. In and of itself, this would be reliable enough to justify a belief that you are contaminated.
However that is not the whole truth, the probability "a priori" is missing in the equation here.
If we ask the exact same question differently: is the probability of being contaminated higher than the probability of a false positive?
The probability of being contaminated "a priori" is 1/1000, whereas the probability of a false positive is 1/100. When comparing these two probabilities, we can see that the chance of a false positive is higher than the chance of being contaminated.
Even though the test was 99% reliable, you are in fact 10 times more likely to be a false positive.
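The arithmetic above is Bayes' theorem. A quick sanity check, assuming "99% reliable" means both 99% sensitivity and a 1% false positive rate (the interpretation the post uses):

```python
# P(contaminated | positive) via Bayes' theorem.
prevalence = 1 / 1000        # a priori: 1 person in 1000 is contaminated
sensitivity = 0.99           # P(positive | contaminated)
false_positive_rate = 0.01   # P(positive | not contaminated)

true_positives = sensitivity * prevalence                 # 0.00099
false_positives = false_positive_rate * (1 - prevalence)  # 0.00999
posterior = true_positives / (true_positives + false_positives)

print(round(posterior, 3))  # about 0.09: roughly a 9% chance you are contaminated
```

So despite the 99% figure, a positive result is about ten times more likely to be a false positive, exactly as the 1/1000 vs 1/100 comparison suggests.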
I've seen multiple people in SE discussing that "extraordinary claims require extraordinary evidence", and this is exactly the concept I am trying to address. Most of the SE discussions then go on to say "God is extraordinary". But is that a justified assumption? In the eyes of the believer, God is absolutely ordinary; the claim that there is no God would be the extraordinary one in their eyes. They see order, and they don't get to witness order appearing out of chaos.
Because of that, the believer accepts evidence that would be seen as unreliable by the non-believer; for them, the perceived probability of a god existing is higher than the perceived probability of the evidence being wrong. We are in the case where a picture of somebody with a dog is sufficient evidence to justify the belief that this person has a dog, because the probability of just anyone having a dog is higher than the probability of the photo being fake.
This is why only questioning the justification of the specific claim isn't always enough; you need to bring them to question their perceived "a priori" probability.
Let's say we are discussing the claim that "Hydroxychloroquine cures COVID-19". Questioning the reliability of the studies is one thing, but we mustn't forget to ask them:
I'm looking for resources about ransomware detection. I found a lot of "good practice" guides and "how to use our commercial ransomware protection" material, but not so much on how you can technically detect ransomware. If you have any advice and/or good resources, I would be grateful :)
I'm interested in understanding the general methodology that other firms follow when penetration testing web applications. It would be great to get a consensus on what is considered best practice.
Do you build your methodology around the OWASP Web Security Testing Guide, or do you just focus on the OWASP Top 10 (presuming you use OWASP at all)?
Do you manually identify XSS vulnerabilities, or do you rely on an automated scanner like that of Burp Suite Pro to identify them (in instances where you're not trying to evade detection), or both?
For which vulnerability types (other than XSS) do you rely on an automated scanner to identify vulnerabilities rather than manual enumeration?
What does your typical working method look like on a web testing engagement? I.e., once you have done a crawl of the app, how do you map out what functionality you are going to test in a way that lets you give the app a thorough test?
So there's a difference between using materialist assumptions to inform your analysis (of the political-economy or the like), and actually asserting that there is nothing that exists beyond the material. Did Marx mean to assert the former, the latter, or both?