Why Goodreads
Goodreads is free to use, and you can access your account via the Goodreads website or the iOS or Android app. And thanks to the Amazon connection, Kindle users can rate books and add them to their Goodreads shelf straight from their device.
Once you sign up for a Goodreads account, you'll be prompted to review books you've already read and add them to your bookshelf so that Goodreads can provide curated recommendations of what to read next. To add a book to your "read," "currently reading," or "to-read" bookshelf, enter the name of the title in the search bar at the top of the page and find the dedicated page for the book.
Underneath the cover image, click the arrow to open a drop-down menu and select which bookshelf to add it to. To access your bookshelves, click "My Books" in the menu bar at the top of the screen. Here, you can add or change your star rating, write a review, and even add the date you read it. Each book on Goodreads has a dedicated page that shows the overall rating users have given a book, a summary, author information, and reviews from the Goodreads community.
Here, you can ask a question, write your review, and like or comment on other reviews. If you're looking for books to read, you can click "Recommendations" on the home page to get curated recommendations from Goodreads based on books you've previously read and rated.
On book pages, there is a "readers also enjoyed" section of similar books. And in the Browse tab, you can search by genre, see a list of new releases, and browse lists voted on by Goodreads users. Those looking to get more involved in the Goodreads community can head to the community tab to join like-minded groups or chime in on a book discussion. You can use Goodreads to keep track of what you're reading, but there are so many additional features that you can enjoy as well.
You can add friends to keep up with what they're reading and rating, join book clubs, enter giveaways to be the first to read newly released books, ask authors questions, and more. Goodreads even offers users a selection of entire ebooks and ebook excerpts for free.

However, the problems have continued. When review bombing campaigns take off on Goodreads, Stein says that authors of color are often the target.
Chupeco says that many marginalized writers they know have had this happen to their books. And the threat of further harassment often discourages authors from speaking out. With the help of SFWA, he says he was eventually able to get Goodreads to take action—but it took a significant amount of time and pressure.
They had people impersonating other authors and board members of SFWA, trying to make it look like people of great importance and respect within my community were openly trashing me and my work.
And since he first spoke with TIME, Tomlinson says his books have come under yet another review bombing attack, complete with negative ratings, reviews and comments. These systems seem to make it much more difficult for scammers to create and use multiple fake accounts in order to review bomb specific products or sellers.

As befits his background in AI and cognitive science, the book is also rich with speculation about intelligence and consciousness, human and artificial.
I found all this entertaining and thought-provoking. He contrasts his approach with the Big Data approach, but also proposes a marriage between the two. The DAGitty web package is also excellent. Or, cross over to the dark side and look into econometrics. In presenting his history of causality, he emphasises his own contribution, and de-emphasises the contribution of others.
This is true. A few other reviews note that causality in social sciences is maybe not the new thing that Pearl argues it is. Pearl reviews examples where we would accept causality has been proven without his formal causal logic. What he communicates well is that, without a clear causal logic, it quickly got messy. Don Rubin (the king of missing data and potential outcomes) disagrees on the value of causal diagrams. The latter camp argues for a more pluralistic approach, and points to some instances where using DAGs and DAGitty has produced some implausible models.
This intrigues me. Dec 04, Kelly Jade rated it it was ok. The book would have been considerably shorter if the author spent less time name-dropping and talking himself up. We get it. Everyone who opposes you is wrong and stupid, and you're the greatest and smartest; just look at all your students with all these high-level faculty positions.
Interesting ideas but a lot of ideas could have been explained more clearly or completely if the author laid off the commentary and ego stroking.
Jul 17, Peter McCluskey rated it really liked it. This book aims to turn the ideas from Pearl's seminal Causality into something that's readable by a fairly wide audience. It is somewhat successful. Most of the book is pretty readable, but parts of it still read like they were written for mathematicians.
History of science
A fair amount of the book covers the era (most of the 20th century) when statisticians and scientists mostly rejected causality as an appropriate subject for science.
They mostly observed correlations, and carefully repeated the mantra "correlation does not imply causation". Scientists kept wanting to at least hint at causal implications of their research, but statisticians rejected most attempts to make rigorous claims about causes. The one exception was for randomized controlled trials (RCTs). Statisticians figured out early on that a good RCT can demonstrate that correlation does imply causation. So RCTs became increasingly important over much of the 20th century [1].
That created a weird tension, where the use of RCTs made it clear that scientists valued the concept of causality, but in most other contexts they tried to talk as if causality wasn't real.
Not quite as definitely unreal as phlogiston. A bit closer to how behaviorists often tabooed the ideas that we had internal experiences and consciousness, or how linguists once banned debates on the origin of language; namely, that it was dangerous to think science could touch those topics.
Or maybe a bit like heaven and hell - concepts which, even if they are useful, seem to be forever beyond the reach of science? But scientists kept wanting to influence the world, rather than just predict it. So when they couldn't afford to wait for RCTs, they often got impatient and acted as if correlations told them something about causation.
The most conspicuous example is smoking. Scientists saw many hints that smoking caused cancer, but without an RCT [2], their standards and vocabulary made it hard to say more than that smoking is associated with cancer. This eventually prompted experts to articulate criteria that seemed somewhat useful at establishing causality. But even in ideal circumstances, those criteria weren't convincing enough to produce a consensus. Authoritative claims about smoking and cancer were delayed for years by scientists' discomfort with talking about causality [3].
It took Pearl to describe how to formulate an unambiguous set of causal claims, and then say rigorous things about whether the evidence confirms or discredits the claims.

What went wrong?
The book presents some good hints about why the concept of causality was tabooed from science for much of the 20th century.
It focuses on the role of R. A. Fisher (also known as one of the main advocates of frequentism). Fisher was a zealot whose prestige was heavily based on his skill at quantifying uncertainty. In contrast, he didn't manage to quantify causality, or even figure out how to talk clearly about it.
Pearl hints that this biased him against causal reasoning. Statistics, as frequently practiced, discouraged causal reasoning, and encouraged "canned" procedures instead. But blaming a few influential people seems to merely describe the tip of the iceberg. Why did scientists as a group follow Fisher's lead? I suggest that the iceberg is better explained by what James C. Scott describes as high modernism and the desire for legibility.
I see a similar pattern in the 20th-century dominance of frequentism in most fields of science and the rejection of Bayesian approaches. Anything that required priors (whose source often couldn't be rigorously measured) was at odds with the goal of legibility. The rise and fall of the taboo on causal inference coincide moderately well with the rise and fall of Soviet-style central planning, planned cities, and Taylorist factory management.
I also see some overlap with behaviorism, with its attempt to deny the importance of variables that were hard to measure, and its utopian hopes for how much its techniques could accomplish. These patterns all seem to be rooted in overconfident extrapolations of simple models of what caused progress. I don't think it's an accident that they all peaked near the middle of the 20th century, and were mostly discredited by the end of the century.
I remember that when I was young, I supported the standard inferences from the "correlation does not imply causation" mantra, and was briefly (and less clearly) tempted by the other manifestations of high modernism. Alas, I don't remember my reasons well enough for them to be of much use, other than a semi-appropriate respect for the authorities who were promoting those ideas.
An example of why causal reasoning matters
Here's an example that the book provides, dealing with non-randomized studies of a fictitious drug, to illustrate Simpson's Paradox, but also to show the difference between statistics and causal inference. The two studies come with identical numbers, so that a statistics program which only sees correlations, and can't understand the causal arrows, would analyze both studies using the same methods.
The numbers in these studies are chosen so that the aggregate data suggest an opposite conclusion about the drug from what we see if we stratify by gender or blood pressure. Standard statistics won't tell us which way of looking at data is more informative.
But if we apply a little extra knowledge, it becomes clear that gender was a confounding variable that should be controlled for (it influenced who decided to take the drug), whereas blood pressure was a mediator that tells us how the drug works, and shouldn't be controlled for. People typically don't find it hard to distinguish between the hypothesis that a drug caused a change in blood pressure and the hypothesis that a drug changed patients' reported gender.
We all have a sufficiently sophisticated model of the world to assume the drug isn't changing patients' gender identity. Yet canned programs today are not designed to handle that, and it will be hard to fix programs so that they have the common sense needed to make those distinctions over a wide variety of domains.
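The reversal described above can be reproduced numerically. A minimal sketch with hypothetical recovery counts (numbers of my own choosing, not the book's):

```python
# Simpson's paradox with hypothetical recovery counts (not the book's numbers).
# The drug looks better within each gender, yet worse in aggregate,
# because gender influenced who chose to take the drug (a confounder).

def rate(recovered, total):
    return recovered / total

# (recovered, total), stratified by gender
drug = {"men": (81, 87), "women": (192, 263)}
no_drug = {"men": (234, 270), "women": (55, 80)}

# Stratified comparison: the drug wins in BOTH groups
for g in drug:
    assert rate(*drug[g]) > rate(*no_drug[g])

# Aggregate comparison: the drug loses overall
drug_total = tuple(map(sum, zip(*drug.values())))        # (273, 350)
no_drug_total = tuple(map(sum, zip(*no_drug.values())))  # (289, 350)
print(rate(*drug_total) < rate(*no_drug_total))  # True
```

The same table, read with a causal model that says "control for gender", supports the drug; read naively in aggregate, it condemns it.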
Continuing Problems?
Pearl complains about scientists controlling for too many variables. The example described above helps explain why controlling for variables is often harmful when it's not informed by a decent causal model. I have been mildly suspicious of the "controlling for more variables is better" attitude in the past, but this book clarified the problems well enough that I should be able to distinguish sensible from foolish attempts at controlling for variables.
Controlling for confounders seems like an area where science still has a long way to go before it can live up to Pearl's ideals. There's also some lingering high modernism affecting the status of RCTs relative to other ways of inferring causality.
A sufficiently well-run RCT can at least create the appearance that everything important has been quantified. Sampling errors can be reliably quantified. Then the experimenter can sweep any systematic bias under the rug, and declare that the hypothesis-formation step lies outside of science, or maybe deny that hypotheses matter (maybe they're just looking through all the evidence to see what pops out). It looks to me like the peer review process still focuses too heavily on the easy-to-quantify and easy-to-verify steps in the scientific process.
When RCTs aren't done, researchers too often focus on risk factors and associations, equivocating about whether the research enlightens us about causality.
AI
The book points out that an AI will need to reason causally in order to reach human-level intelligence. It seems like that ought to be uncontroversial; I'm unsure whether it actually is. But Pearl goes further, saying that the lack of causal reasoning in AIs has been "perhaps the biggest roadblock" to human-level intelligence. I find that somewhat implausible. My intuition is that general-purpose causal inference won't be valuable in AIs until those AIs have world-models which are at least as sophisticated as crows' [4], and that when that level is reached, we'll get rapid progress at incorporating causal inference into AI.
High modernist attitudes may well have hurt AI research in the past, and may still be slowing AI research a bit. But Pearl exaggerates these effects. To the extent that Pearl identifies tasks that AI can't yet tackle, I expect that mainstream machine learning is mostly on track to handle that variety of concepts any decade now. I expect that until then, AI will only be able to do causal reasoning on toy problems, regardless of how well it understands causality.
Conclusion
Pearl is great at evaluating what constitutes clear thinking about causality. He's somewhat good at teaching us how to think clearly about novel causal problems, and rather unremarkable when he ventures outside the realm of causal inference.
Footnotes
[1] - RCTs and Fisher's influences in general don't seem to be popular in physics or geology. I'm curious why Pearl doesn't find this worth noting. I've mentioned before that people seem to care about p-values being less than 0.05. Clarke's first law applies here: it looks like about 8 studies had some sort of randomized interventions which altered smoking rates, including two studies focused solely on smoking interventions, which generated important reductions in smoking in the control group.
The RCTs seem to confirm that smoking causes health problems such as lung cancer and cardiovascular disease, but suggest that smoking shortens lifespan by a good deal less than the correlations would indicate. Those sources of uncertainty have been obscured by the people who signal support for the "smoking is evil" view, and by smokers and tobacco companies who cling to delusions. The book notes that there is a "smoking gene" (an rs variant, aka "Mr. Big"), but mostly it just means that smoking causes more harm for people with that gene.
Pearl thinks quite rigorously when he's focused exclusively on causal inference, but outside that domain of expertise, he comes across as no more careful than an average scientist.
I find Wikipedia's description of non-human causal reasoning to be more credible. Aug 29, Juan rated it really liked it. Shelves: non-fiction. This was a borrowed book, the kind of book for which I have the utmost respect.
Meaning, no reading it on the beach or anywhere close to wet things. Which is why it took longer than expected, although finally I had to disrespect it just a little tiny bit, since I had a deadline to return it. Do borrowed books take less time to be read than any other kind of book? We might find a correlation between these two variables. Correlation is not causality, however (and I'm not entirely sure there's such a correlation, either). And until a relatively short time ago, there were no good tools to tell whether one of the factors shortening the acquisition-to-finished-reading time was the fact that it was borrowed from someone else, or from a library. In this book, Judea Pearl and coauthor talk about how statistics and mathematics in general evolved from that only-correlation phase, to a phase in which it's possible, through graphical tools, to examine causality in a principled way, and also how to challenge the assumption of correlation equal to causation.
Might find them or not, but if found there will be some tools that will tell you how different factors influence outcomes. Along the way, as is usual in popular-science books, anecdotes of the curious characters that populated mathematics during the last century turn up; we also discover the inside story of how tobacco was actually discovered to cause cancer, finding along the way that causation is no laughing matter, and might literally lead to life-or-death decisions.
All in all, an interesting read, even if you are not interested in the history of science, but just want to have a few tools to analyze current news. This was a long, strange trip through the statistical analysis of causation. Judea Pearl writes beautifully and in an almost grandiose manner, dubbing himself a Whig historian of the science of causation--how it was forgotten by statistical analysis that put correlation at the pinnacle of analysis, how it was rediscovered later, and in particular the importance of structural models that combine an understanding of the world with the data--but do not just let the data speak for itself.
The book combines a history of science with a number of specific examples. But mostly Pearl's method centers around writing causal diagrams with arrows that allow you to identify blockers, confounders, and the like.
The arrows and terminology were not familiar to me from econometrics, but many of the conclusions and techniques were. In some cases Pearl claimed a greater profundity than I was able to follow; for example, I could understand the Bayesian interpretation of his argument, but he claimed there was a bigger one.
In other cases he claimed that his diagrams opened up entirely new paths to solving causal questions and understanding the results of statistical analysis. In all of these cases I confess that I mapped them into my previous understanding rather than expanding, changing, evolving my previous understanding--and am unsure if this represents my limited understanding of his book or his overclaiming about his ideas, many of which were well understood and implemented in econometrics before.
The last chapter covers AI, free will, explicability, and correlation vs. causation. Overall, I would recommend this to economists or others who are very interested in statistical analysis; it takes some effort at times (nothing like a textbook, which would be the best way to assess the novelty of some of the ideas), but amply rewards it.
Aug 09, Siddhartha Banerjee rated it it was amazing. Every now and then you read a book that introduces you to a new concept and forces you to reevaluate your world view, leaving you better for it. For me, this was one such book. Highly, highly recommend. I wish I had known this stuff when I was doing my PhD. Nov 05, Karel Baloun rated it it was amazing. Valuable as a permanent reference, for ongoing consultation and inspirational revisiting, with an absolutely ideal annotated bibliography.
Artisan craftsmanship that will certainly withstand the test of time. Invest days in simplifying and repairing how you think causally!
Fun and readable, and so practically valuable. Knowledge is in the model, not just waiting to emerge from the data. In one passage, Pearl elegantly distinguishes two models of how talent and luck generate success. If luck applies independently in each generation, the model is mathematically stable, but if it accrues with talent over generations you get a wide, persistent distribution of outcomes. For me, this profoundly simplified economic inequality, and shows how useful a rigorous framework for thinking can be.
Yet with either one, talent is passed down generations, aiding success under equal opportunity. Appropriately, in the engaging historical chapter 2, Pearl often asks why historical figures thought as they did, whenever he is able to answer the question himself.
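The two luck models described above can be illustrated with a toy simulation. This is my own sketch, not Pearl's actual model; the uniform luck distribution, 40 generations, and 2000 trials are arbitrary assumptions:

```python
import random
import statistics

def final_success(accrues, generations=40):
    """One lineage's success after several generations of luck draws."""
    success = 1.0
    for _ in range(generations):
        luck = random.uniform(0.5, 1.5)
        if accrues:
            success *= luck   # luck compounds with what was already accumulated
        else:
            success = luck    # each generation's luck stands alone
    return success

random.seed(0)  # fixed seed so the run is reproducible
independent = [final_success(False) for _ in range(2000)]
accruing = [final_success(True) for _ in range(2000)]

# Independent luck keeps outcomes in a narrow band; accruing luck
# produces a wide, persistent spread of outcomes.
print(statistics.pstdev(independent) < statistics.pstdev(accruing))  # True
```

The multiplicative (accruing) version is a random walk in log space, so its spread keeps growing with the number of generations, which is the "wide, persistent distribution" the review describes.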
And it is consistent to see Fisher return in his evil, cantankerous role during the tobacco trials, and as a professor of eugenics. A few pages provide the simplest and most lucid explanation of estimating the likelihood of having a disease from a positive test result. So useful. Fun seeing how fuzzy math and Bayesian networks, cutting-edge research when I was in grad school, have evolved into the mainstream.
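The disease-test estimate praised above is an application of Bayes' rule. A minimal sketch, with hypothetical prevalence and test accuracies (not the book's numbers):

```python
# Bayes' rule for P(disease | positive test), with hypothetical numbers.
prevalence = 0.01        # P(disease)
sensitivity = 0.90       # P(positive | disease)
false_positive = 0.09    # P(positive | no disease)

# Total probability of testing positive
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Posterior probability of disease given a positive result
p_disease_given_positive = sensitivity * prevalence / p_positive

# Despite a seemingly accurate test, most positives here are false alarms,
# because the disease is rare.
print(round(p_disease_given_positive, 3))  # 0.092
```

The counterintuitive smallness of the posterior is exactly the point the book's explanation drives home: the prior (prevalence) dominates when the condition is rare.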
I can almost imagine my alternative life, had I studied this and developed ways to use it. Partly, though, back then we had no big data. The paradox chapter is fascinating, and especially meaningful because it anchors the earlier theoretical ideas into memory.
The closing chapters are difficult, in terms of figuring out how to apply these sparkling, sharp tools to your own life and work. Well, I suppose that should be a challenge. The final chapter on AI is only a start, and leaves little on which to build. His assertion that free will is superior in performance to Borg-like behavior from simulations like generative adversarial networks feels unproven.
Jan 24, Alex Lee rated it it was amazing. Shelves: economics, science, info-tech, mathematics, philosophy, impressive. This is amazing. Essentially Pearl and Mackenzie provide a manner to assess causation through data alone. The key is to provide a model for causation to test the data against. Much of the stats goes over my head, but intuitively we understand how to test for causation; how to get at what matters, what doesn't, what kind of matters, and under what conditions we should experiment.
But then again, we don't. Often we control for too much, indirectly influencing our experiments. What we have here is a frontier of twisting around our thinking of causation.
Often we also think that causation should be expressed in terms of direct causes, but this is too simple. Only in this make-believe sense can we get at what really matters, because reality can have multiple causes, all of which have different weights and forms of interference. Feb 03, Bilge rated it it was ok. Shelves: cs, popular-science, data. I love such books, and I wanted to love this one as well.
However, this was not to my taste. I suppose I wanted to see more reference to AI than to causation theory. And I believe the authors extended every chapter more than necessary. Therefore, after a point, I got bored. Mar 07, Tam rated it liked it. Shelves: non-fic. I don't think the book is particularly well written, but its contents make up for the writing, if you have the patience and the motivation.
Towards the second half of the book, some parts were personally illuminating for me, once I finally got used to the causal graphs. The language of statistics and structural equations that I was trained in does seem inadequate.
I agree that graphs straighten out all the assumptions, and then people can have a good discussion about causation, the holy goal among all, yet so elusive.
For example, I was in Rubin's camp, stuck in the thinking mode that there is no way to design an RCT for mediation analysis, so the whole endeavor is hopeless. Judea Pearl certainly shows me the merit of other approaches and convinces me that it's a path worth exploring.
I will definitely start to do so. Yet I wonder how long it will take for economics as a field to start seriously adopting these techniques. Sep 11, Marcel Santos added it. Shelves: partially-read. I came across this book after starting to listen to some professionals of evidence-based medicine, which is a fascinating field using advanced scientific methods.
Concepts such as Bayesian analysis, among others, represent a challenge to someone like me who deals with areas of knowledge (mostly Law and Economics) which are still far from using them. Another motivation was my experience in noticing that scientific methods, language and concepts born in one field have been increasingly borrowed by others, signaling possible ambitious unifications. However, I must acknowledge that this is too deep a trip into advanced notions of statistics and mathematics.
I have no doubt that this is a masterpiece on an absolutely relevant issue; it even seems like a solid framework for technology evolution. Unfortunately it is out of reach for those not versed in the exact sciences. It would be unfair if I rated it. Never mind. I may come back to it in the future if my studies drive me again to the issue. Jan 12, Vicki rated it it was ok. Shelves: borrowed-book. The examples were good, but for the rest of it the writing was muddled.
Plus, I chose to listen to this as an audiobook, which was a huge mistake because you can't see any of the diagrams, and this book seems to rely on them. I think his theory is interesting, but I wouldn't recommend this book. Apr 04, Irma Ravkic rated it it was amazing. The book examines the notion of causality (the question of why something happened) and why it's still something we don't excel at, both in our lives and in science.
Judea Pearl is a well-known computer scientist who invented Bayesian networks (which are not necessarily causal), and in this book he argues that in order to have strong AI (many people believe we already have it), we need our AI machines to have some notion of causality and to be able to deal with counterfactuals (I did X and got some outcome Y, but what would have happened if I hadn't done X?).
Pearl claims that causal questions cannot be answered from data alone, and that is a very interesting claim in the world of deep learning and big data. Most of the machine learning models we have today merely describe or transform the data - but they're not very good at interpreting the data in terms of why something happened because they don't incorporate causal mechanisms.
That means that most of the time in machine learning we talk about patterns and associations, and the next step is to think about causation and counterfactuals. Pearl doesn't completely dismiss the notion of finding an association; it's maybe a start to examining more deeply the reason why those associations exist. One of the things we want of our intelligent agents is for them to reason in long chains of causality.
For example (Pearl's), if our AI robot starts vacuuming at night and wakes us up, we say: "You shouldn't have vacuumed now" (a counterfactual). The vacuum robot should realize that this doesn't mean we never want it to vacuum our bedroom; it should understand that vacuuming late at night when people are asleep might make someone angry, but that it's fine to vacuum late at night when nobody's at home.
We're not even close to this in current AI, but many people think we are. Some concepts in the book are too technical to grasp in an easy read and hard to follow without maybe seeing the research papers and full work of Judea Pearl, but the book really tries to give intuitions and real-world examples behind these complex mechanisms of causality modeling and inference.
That being said, the book contains a lot of stories from our history that examine the (mis)use of causality. I might go back and read it one more time to settle it in. In a perhaps more exaggerated fashion, I see this book and the work of Pearl as a 'holy book' of AI we should refer back to from time to time, amongst others.
May 22, Julia rated it really liked it. It took me an incredibly long time to finish this book, but in the end, after a global pandemic caused me to return to my half-finished book pile, I did really like it. Maybe more like 3. What I like the most about this is the clear-minded ideas on formalizing assumptions about a causal model before fitting to data (using a graph, in Pearl's view) and then testing those assumptions.
In hindsight, this approach with the graphs would have improved some of the work I've done in the past. I know this book is framed as for a general audience, but IMO it would take a very motivated general reader to absorb this.
Jun 27, Richard Thompson rated it really liked it. Shelves: philosophy. My mistake with this book was to listen to it as an audiobook. Smarty Pants thought he could absorb it all in his car.
I certainly got the general drift and understood the concepts behind the back-door method, the front-door method, and dealing with mediators, but a lot of the richness of the illustrations was lost on me without the ability to work directly with the causal diagrams that are a critical part of the book's theory.
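The back-door method mentioned above reduces, in the simplest one-confounder case, to the adjustment formula. A sketch with made-up probabilities (the distributions below are purely illustrative):

```python
# Back-door adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) * P(z),
# assuming Z satisfies the back-door criterion for X -> Y.
# All numbers are hypothetical.

p_z = {0: 0.5, 1: 0.5}              # P(Z=z)
p_y1_given_xz = {                   # P(Y=1 | X=x, Z=z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.60,
}

def p_y1_do_x(x):
    """Interventional probability of Y=1 when X is set to x."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

# Average causal effect of X on Y, after adjusting for the confounder Z
print(round(p_y1_do_x(1) - p_y1_do_x(0), 3))  # 0.2
```

The key move is weighting by P(z) rather than by P(z | x): that is what severs the back-door path from Z into X and turns an observational quantity into a causal one.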
I strongly recommend that others considering this book go for the print edition. Many of the methods and theories of this book seem like simple common sense, which is understandable, since the human brain naturally looks for causation everywhere and is frequently fooled into finding patterns and causation where none exist.
But as Pearl shows, when his methods are rigorously applied, the results are often surprising, going well beyond common sense. And some problems that seem horribly complex or even unsolvable become very simple with Pearl's methods.
The methods of causal analysis that Pearl describes are obviously useful for scientific experiments and for data analysis in the social sciences, but I think that they also have some value for problems that come up in business and in daily life, so I am inspired by this book to look for problems in my life and work where drawing a causal diagram and finding ways to block pathways from confounders can be applied.
About Judea Pearl. Judea Pearl is an Israeli-American computer scientist and philosopher, best known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks.