Can an algorithm tell when kids are in danger?
January 9, 2018 6:33 AM

 
“We know there are racially biased decisions made,” says Walter Smith Jr., a deputy director of C.Y.F., who is black. “There are all kinds of biases. If I’m a screener and I grew up in an alcoholic family, I might weigh a parent using alcohol more heavily. If I had a parent who was violent, I might care more about that. What predictive analytics provides is an opportunity to more uniformly and evenly look at all those variables.”

Well maybe. Or maybe it allows you to make your pre-existing biases 'objective' by wrapping them in mathematical abstractions and hard coding them.
posted by signal at 6:36 AM on January 9, 2018 [45 favorites]


Well maybe. Or maybe it allows you to make your pre-existing biases 'objective' by wrapping them in mathematical abstractions and hard coding them.

This feels like the self-driving car problem. Self-driving cars will inevitably screw up at some point and people will die. When that happens there will be all sorts of dire articles about how terrible self-driving cars are, and how we've given up control to machines who can't possibly understand the value of human life, etc... But the choice isn't self-driving cars or nothing, it's self-driving cars or people driving cars, and people are absolutely terrible at driving.

Same issue here. Will biases get encoded into the system? Sure. But it's not like they aren't already there in the existing system (and then some). Hopefully here there will at least be some sort of outcomes-based validation check on the system that will steer it towards more neutral outcomes. And if not, at least here some rigorous statistical analysis can point out the emperor doesn't have any clothes. At least here we should be able to see what the weights inside the decision-making process actually are. Is it possible that there will be some sort of self-reinforcing loop that entrenches existing stereotypes? Undoubtedly. But again it's not like that isn't already the case with the current system.
posted by leotrotsky at 6:59 AM on January 9, 2018 [32 favorites]


I appreciated that the article emphasized the need for transparency:

"...complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed and closely guarded by private firms.
...
The Allegheny Family Screening Tool developed by Vaithianathan and Putnam-Hornstein is different: It is owned by the county. Its workings are public. Its criteria are described in academic publications and picked apart by local officials."

posted by Mr.Know-it-some at 7:06 AM on January 9, 2018 [27 favorites]


a lot of the details of this story remind me of the ProPublica piece on how jails use algorithms as an 'objective' mask for keeping black and brown prisoners locked up, because courts now believe their racist decisions are backed by the objective moral authority of 'science'

At least here we should be able to see what the weights inside the decision-making process actually are.

if the data is freely available for research and the grant money is there. otherwise this is just adding yet another institutional power dynamic to existing biases, that of 'big data'
posted by runt at 7:16 AM on January 9, 2018 [11 favorites]


"...complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed and closely guarded by private firms.

My job is analytics, and over the years I've become a fanatic about clear, mutually-agreed-upon definitions and transparency in calculations. You can have all the glossy dashboards you want to build, but if you can't tell me what you're measuring and how, in plain English, then you've created a huge abstract art display and wasted my time.

If you can't tell someone what you're measuring, how, and why, there's a reason and it's either laziness or deception.
posted by winna at 7:25 AM on January 9, 2018 [49 favorites]


But the choice isn't self-driving cars or nothing, it's self-driving cars or people driving cars, and people are absolutely terrible at driving. Same issue here

Yes and no. The current options are "leave intervention decisions to individual interpretation, and take all the shitty human biases that come with that" or "use an algorithm to try to quantify things that are not inherently quantifiable." Someone's going to write that algorithm, and they're probably going to base it on case data... which was generated by CPS people acting on shitty human biases. I won't go so far as to say it's impossible, but I would want to make damned sure I was controlling for race and gender when I sic my neural network on the problem, and then I'm not purely "data-driven."
posted by Mayor West at 7:29 AM on January 9, 2018 [4 favorites]


The unfortunate/sad fact behind this move is that most CPS systems are criminally understaffed and overworked and simply have no other choice but to triage the calls and reports, complete with whatever baggage the workers carry with them to the job. In this "technology can fix everything" age of ours, using an algorithm-based system to do the task is unsurprising.
posted by Thorzdad at 7:31 AM on January 9, 2018 [17 favorites]


Thorzdad: The unfortunate/sad fact behind this move is that most CPS systems are criminally understaffed and overworked and simply have no other choice but to triage the calls and reports, complete with whatever baggage the workers carry with them to the job. In this "technology can fix everything" age of ours, using an algorithm-based system to do the task is unsurprising.

THIS. I'm not some Luddite opposed to technology and algorithms on principle, but I don't think algorithms can substitute for staffing and funding.

I read TFA and one of the things that struck me were the two kids who died in the house fire while their mom was out earning a living as a dancer. I thought, if Mom could just have gotten reliable child care for those kids a tragedy would have been avoided before CPS was even called. We talk a great game about Our Children Are Our Future And So Precious but we don't put our collective money where our mouths are, especially for children of color and low-income children. I think that's where many of the tragedies (children left alone, abusive stepparents/ partners killing kids, etc.) come from.
posted by Rosie M. Banks at 7:51 AM on January 9, 2018 [41 favorites]


> Rosie M. Banks:
"We talk a great game about Our Children Are Our Future And So Precious but we don't put our collective money where our mouths are, especially for children of color and low-income children."

This. It comes down to personal responsibility versus societal responsibility as the main factor, and the right wing (in most places) favors the former while ignoring the fact that it's massively skewed against the poor, the female, the sick and those of less valued races.

It's the same with the right's answer to crime being 'tough': it makes it seem as if crime is an individual's random decision rather than a predictable outcome of social conditions.

This is why I define myself as a left winger: not because I want to take away people's McMansions and eat their babies, but because the world is very fucking much not a level playing field, and it's incredibly cruel, wasteful and disingenuous to pretend it is.
posted by signal at 8:11 AM on January 9, 2018 [16 favorites]


Hopefully here there will at least be some sort of outcomes-based validation check on the system that will steer it towards more neutral outcomes. And if not, at least here some rigorous statistical analysis can point out the emperor doesn't have any clothes. At least here we should be able to see what the weights inside the decision-making process actually are.

I have to admit, I've grown a bit tired of having to explain over and over again to people that when you're talking about technological systems, often privately-developed, deployed against marginalized populations, the idea that competent "outcomes-based validation checks" can be taken for granted, that someone will be out there running rigorous statistical analysis using the immense funding available for ensuring that poor or brown people get justice, and that the workings of algorithms will of course be and remain transparent (and that competent people will as a result be scrutinizing them closely) is pure wishful thinking. These are not speculative objections. If your level of thinking about using algorithms in this way is essentially "assuming a perfectly spherical cow...", you need to stop and do some reading.
posted by praemunire at 8:57 AM on January 9, 2018 [21 favorites]


But the choice isn't self-driving cars or nothing, it's self-driving cars or people driving cars, and people are absolutely terrible at driving.

Nope, the difference here is between letting the human driver who's right there in the car do the driving, or letting human beings automate the driving decisions ahead of time as an abstract theoretical exercise -- still using human judgment and reasoning, just expressed through a human-built, human-designed system that applies those decisions automatically, without human intervention. There is no magic intelligence in computers that makes them anything more than machines for automating human judgments based on our best guesses and analysis of how we ought to always decide things.
posted by saulgoodman at 9:29 AM on January 9, 2018 [5 favorites]


Everybody seems to want to see computers as a get-out-of-jail-free card for all the difficulties inherent in using sound judgment and making good choices but until/unless we develop some massively new kind of computing technology that can actually evaluate evidence independently and reason out judgments for itself, the best we can do with computers is automate what we already get right sometimes and mass produce/bulk apply our mistakes when we don't really know what we're doing.
posted by saulgoodman at 9:36 AM on January 9, 2018 [9 favorites]


Even the proponents of the system have doubts: “All of the data on which the algorithm is based is biased. Black children are, relatively speaking, over-surveilled in our systems, and white children are under-surveilled. Who we investigate is not a function of who abuses. It’s a function of who gets reported.”

Articles like this one focus on the high-pathos cases: preschoolers facing frightening levels of neglect and sometimes abuse, families with a history of crime and drug use, and the tragic cases where the agent decided "not worth investigating" and turned out to be wrong. They don't talk about poor families with a single mom with a good job, whose ex's relatives keep calling to report her with entirely made-up details - every call goes on her record, no matter how spurious; when a teacher reports an autistic child for "acting weird maybe there's trouble at home," that can trigger an investigation. This kind of article is always focused on "oh no; there was horrible abuse and we didn't stop it in time!," never on "oh no, there was a perfectly healthy, happy family and we dragged them through the criminal justice system and caused exactly the kind of emotional trauma we're supposed to prevent."

And of course, they don't talk about the white, upper-class professional families that get reported and ignored. I've known people who worked in social services who said, "we knew what was going on, but the father was a doctor; no way was that going to be investigated."

This particular article is just about deciding whether or not to investigate; it says nothing about the biases and judgment calls of the investigators: whether they believe that all poverty is a sign of neglect; whether they think all non-Christian religions are potentially harmful for children; whether they think loud voices mean violence.

I'm aware that children's services are drastically understaffed and that they make a lot of judgment calls about emergencies, and I don't want them not to be investigating claims. But I want them held much more accountable than they are; I want them fired for proven bad judgment calls (in either direction); I want their decisions open to the public, barring the redaction needed to protect children's identities. I don't see how an algorithm helping with the decisions is going to increase scrutiny of the agencies.

I'm aware that we don't really know how AI works. Fine. Build AI to document the decisions of other AI algorithms: let them learn to provide explanations along with judgments.
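Even without anything fancy, the simplest models already support a crude version of this: for a linear model, each feature's contribution to a given score is just its weight times its value. A minimal sketch (synthetic data and invented feature names, not anything the county actually uses):

```python
# A bare-bones version of "explanations along with judgments": for a linear
# model, each feature's contribution to a score is its weight times its value.
# Synthetic data and invented feature names throughout -- not the county's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["prior_referrals", "parent_age", "prior_placements"]  # hypothetical
X = rng.random((300, 3))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)            # stand-in outcome label
model = LogisticRegression().fit(X, y)

case = X[0]
score = model.predict_proba(case.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * case                 # per-feature contribution to the log-odds (intercept aside)
print(f"risk score: {score:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```

Printed next to each score, something like that would at least give a screener, and an appeals process, something concrete to argue with.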
posted by ErisLordFreedom at 9:46 AM on January 9, 2018 [9 favorites]


Maybe I'm missing something, but it sounds to me like the algorithm is being used as a tool to aid human workers in reaching a judgment rather than making the judgment for them. And the fact that it can search multiple data sources that it would be either impractical or impossible for the human workers to search on their own means that the worker has more information with which to make that judgment than they would otherwise.
posted by The Underpants Monster at 9:56 AM on January 9, 2018 [7 favorites]


Everybody seems to want to see computers as a get-out-of-jail-free card for all the difficulties inherent in using sound judgment and making good choices but

oddly, this gets me reflecting on so-called techno music in the early 1990s. It was a comparatively new thing then and, more to the point, the gear was getting very affordable, so suddenly "everybody" was doing it, or certainly could (so much so that it was becoming cheaper to outfit yourself with a sampler etc. than to worry about buying actual real-world gear, not to mention keeping a band together). All pretty cool, and it led to a real renaissance in machine-driven musical options.

But meanwhile, within the scene that rose up around it, there were people (often rather loud voices) insisting that now pretty much ALL music must now be made with these machines, that it was somehow shamefully anachronistic to even think of being in an old-school band, or going to see one, or buying records being released by one. Now, it had to be ALL techno, ALL the time. Which was a dumb position to take, and wrong if you cared about music being free to be whatever music wanted/needed to be.

Anyway, it feels the same in this particular historical/cultural moment with regard to algorithms. Like a sampler or whatever back in 1991, they're a hell of a cool tool and helpful in all manner of ways, but for fuck's sake, can we please check the pathology that seems to want them to be the ONLY tool, that wants to chuck a million babies out with the old bathwater, because ... well, I've never really gotten clear on what the motivation is here, beyond humans being afraid of anything less than perfection.

Maybe I'm missing something, but it sounds to me like the algorithm is being used as a tool to aid human workers in reaching a judgment rather than making the judgment for them.

good.
posted by philip-random at 9:58 AM on January 9, 2018 [2 favorites]


What are the criteria used for assessment? Is there a child abuse/neglect equivalent of the Lethality Assessment Program (example here) for domestic violence?
posted by nicebookrack at 10:07 AM on January 9, 2018 [2 favorites]


I was getting enraged until they mentioned the system is public and somewhat transparent. That doesn't mean it's perfect, but it's a start. And the fact that it's used as a decision aid is the right approach, IMHO, for this kind of thing.

The article didn't specify if it's a rule-based system or just-a-pile-of-randomly-jittered-linear-algebra (machine learning). With a rule-based system you can start figuring out why something happened or didn't happen, and decide which rules to add or remove. You still need to do this diligently to avoid introducing biases, but it's very much driven by the expertise of the people writing the rules.

Machine learning, while very adept at solving some problems, isn't that transparent (though you could make the training data transparent), may require more training data than is available, and will 100% reproduce whatever bias is present in the training data.
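To make the distinction concrete, here's a toy rule-based screener (entirely invented rules and point values, nothing to do with the Allegheny tool): every point in the score traces back to a named rule that you can argue about, audit, and change.

```python
# Toy rule-based screening score -- hypothetical rules and point values,
# not the Allegheny Family Screening Tool's actual criteria.
def screening_score(referral: dict) -> tuple[int, list[str]]:
    """Return a risk score plus the list of rules that fired, so the
    reasoning behind any given score can be read off directly."""
    rules = [
        ("prior substantiated referral",    referral.get("prior_substantiated", False), 3),
        ("child under 3 years old",         referral.get("child_age", 18) < 3,          2),
        ("prior out-of-home placement",     referral.get("prior_placement", False),     3),
        ("report from a mandated reporter", referral.get("mandated_reporter", False),   1),
    ]
    fired = [(name, points) for name, condition, points in rules if condition]
    return sum(points for _, points in fired), [name for name, _ in fired]

score, reasons = screening_score({"prior_substantiated": True, "child_age": 2})
print(score, reasons)  # 5 ['prior substantiated referral', 'child under 3 years old']
```

With a learned model you get fitted weights instead of hand-written rules, which is why publishing the training data and the coefficients matters that much more there.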
posted by WaterAndPixels at 10:08 AM on January 9, 2018


I'm aware that we don't really know how AI works.

Neural networks can often be inscrutable about why they produce the decisions they do, but 'expert systems' like this are often either linear classifiers (which effectively assign point values to different kinds of evidence and then decide based on the total) or decision trees (which follow a flow chart to make their decisions), both of which can be relatively easily inspected to see why they made a specific decision.
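For example, scikit-learn will hand you the point values of a linear model and the literal flow chart of a tree. Random data and made-up feature names below, just to show the kind of inspection I mean:

```python
# Minimal sketch of inspecting a linear classifier and a decision tree --
# synthetic data, invented feature names, purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["prior_referrals", "child_age", "prior_placements"]
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

linear = LogisticRegression().fit(X, y)
for name, weight in zip(feature_names, linear.coef_[0]):
    print(f"{name}: {weight:+.2f}")        # the "point value" each kind of evidence carries

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=feature_names))  # the flow chart itself
```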
posted by Pyry at 10:22 AM on January 9, 2018 [3 favorites]


I haven't had a chance to dig into it yet, but here is a PDF describing the algorithm. At a quick glance, I'm seeing phrases like "probit regression model", which suggests to me that it's using standard statistical approaches rather than machine learning approaches. It would be lovely to have a stats geek take a look through it.
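For anyone wondering what a probit model even is: it's an ordinary regression whose output gets squashed through a normal CDF to give a probability, and its fitted coefficients print out like any other regression's. A toy version on fake data (statsmodels, invented predictors, nothing from the actual report):

```python
# Toy probit regression -- synthetic data only, not the county's model or variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))                        # two made-up predictors
latent = 0.8 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(size=n)
y = (latent > 0).astype(int)                       # e.g. "re-referred later" yes/no

model = sm.Probit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())                             # readable coefficients, like any regression
```

Coefficients like those are as inspectable as the linear classifiers Pyry mentions above.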
posted by clawsoon at 10:24 AM on January 9, 2018


Statistics-based and ML-based algorithms optimize a metric. What's the right metric for algorithm performance here? Is it that different racial groups have investigations by CPS at an equal level? Is it that different racial groups have investigations by CPS at an equal level, scaled for reporting differences between racial groups? Is it that the maximal number of kids are "rescued" from their parents? Is it the minimum number of false positive investigations by CPS? Is it some combination of the above?

If you can provide that metric, the algorithm will do what you want.

The problem isn't the algorithm, it's that we can't provide a metric we mutually agree on.
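A toy illustration of why agreeing on the metric is the hard part (all numbers invented): the same set of screening decisions can look equitable under one metric and inequitable under another.

```python
# Same decisions, different verdicts depending on the metric -- fabricated
# numbers, purely to show that the candidate metrics can disagree.
import numpy as np

group    = np.array(["A"] * 100 + ["B"] * 100)
screened = np.concatenate([np.repeat([1, 0], [30, 70]),                 # group A: 30 screened in
                           np.repeat([1, 0], [30, 70])])                # group B: 30 screened in
outcome  = np.concatenate([np.repeat([1, 0, 1, 0], [20, 10, 10, 60]),   # later substantiated?
                           np.repeat([1, 0, 1, 0], [10, 20, 5, 65])])

for g in ["A", "B"]:
    m = group == g
    screen_rate = screened[m].mean()                      # equal rate of investigation?
    fpr = screened[m & (outcome == 0)].mean()             # equal false-positive rate?
    yield_ = outcome[m & (screened == 1)].mean()          # equal yield per investigation?
    print(f"{g}: screen-in {screen_rate:.2f}, FPR {fpr:.2f}, yield {yield_:.2f}")
# A: screen-in 0.30, FPR 0.14, yield 0.67
# B: screen-in 0.30, FPR 0.24, yield 0.33  -- equal screen-in rates, unequal everything else
```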
posted by saeculorum at 10:37 AM on January 9, 2018 [2 favorites]


That was super-interesting--thanks for sharing.

saeculorum--To me, this use of predictive analytics seems like a place where the metrics are unusually clear. At the most stringent level, how does the algorithm classify calls received about kids who are later injured or killed by abuse/neglect? It sounds like the agency in this article is using a less-stringent (but still useful) metric of counting as true positives those calls where a caseworker investigates and finds the family "needs services" (e.g., the call is substantiated and the kids are judged as being in need of help once a real human lays eyes on the situation), versus having the case closed because a face-to-face with a caseworker leads them to judge the kids are not at risk.

I thought it was interesting and a good sign that use of the algorithm has *reduced* the racial disparity in what percentage of calls are screened in for a caseworker to do an investigation, while also increasing the yield for those investigations (so caseworkers aren't wasting as much time chasing down families that were called in for frivolous or unsubstantiated reasons).

I dunno, I don't disagree with everyone's good points above about the way predictive analytics can simply reinforce biases while also obscuring that it's doing that--garbage in, garbage out. But it does seem to me that in cases where there are relatively clear-cut criteria for the outcome we're trying to predict, and the current system relies too much on the "gut feeling" or "expertise" of biased practitioners, those are the places where this sort of tool can do a good job at reducing the effects of unconscious/conscious bias. The very good NPR story a few weeks ago about the racial disparities in postpartum complications and maternal mortality presents another place where it seems like predictive analytics could be used to really improve upon a not-well-functioning system of human intuition that has shown itself to be very racially biased in practice.
posted by iminurmefi at 11:20 AM on January 9, 2018 [9 favorites]


As somebody with a lot of practical experience on this subject, absolutely algorithms can be useful tools for controlling and throttling the flow of information humans might use to make judgments, but it's a mistake to think machines can be held morally accountable when they screw up because somewhere in that chain, it was still one or more humans who screwed up and used bad judgment. People tend to *feel* less personally responsible and less like they are making choices when the process is somewhat automated for them though.
posted by saulgoodman at 11:30 AM on January 9, 2018 [2 favorites]


They don't talk about poor families with a single mom with a good job, whose ex's relatives keep calling to report her with entirely made-up details...

But I want them held much more accountable than they are; I want them fired for proven bad judgment calls (in either direction);


That exact thing happened to me for more than a decade, so let me say I understand the concern better than most here. I possess many thoughts, so let me summarize.

The opposite of good judgement is not bad judgment. It's zero tolerance. When I was a kid and forgot my pocketknife in my pocket, the teacher took it and gave it back after class with some words about not doing that again. When my son did it, he got mandatory ISS and a note in his file. Which was the "better" solution?

What I'm saying is that if you're going to allow for judgment, API or otherwise, then you have to allow for mistakes. And since there is no easy heuristic in these cases, whatchagonna do? Humans are judging other humans based on incomplete and inaccurate information. Mistakes are just going to happen.

It was a mistake for CPS to take my ex seriously the first time, and it was a mistake the 100th time. But - it could just as easily have been the opposite.

What isn't needed is more pressure and no tolerance for error. What is needed is a transparent process, a good appeals process, and willingness to accept, admit, and resolve error. We need to assume and accept that the entire process is imprecise and devise methods to deal with the errors, because it is not possible to avoid the errors.

All the computer AI will do is make the errors standard across the organization. Now, instead of a "my gut says the kids should stay" we get a "the app says the kids should stay". It's magic in either case - the computer doesn't resolve the imprecision, it just removes the humans from blame.

Welcome to the 21st century : You're either above the API or below it.
posted by Pogo_Fuzzybutt at 11:39 AM on January 9, 2018 [12 favorites]


This is the most important part of the Allegheny project:
The Allegheny Family Screening Tool developed by Vaithianathan and Putnam-Hornstein is different: It is owned by the county. Its workings are public. Its criteria are described in academic publications and picked apart by local officials. At public meetings held in downtown Pittsburgh before the system’s adoption, lawyers, child advocates, parents and even former foster children asked hard questions not only of the academics but also of the county administrators who invited them.
Hopefully that process of community feedback is continuing to shape the project. That's how problems with algorithms get found and fixed.
posted by clawsoon at 12:16 PM on January 9, 2018 [4 favorites]


I read TFA and one of the things that struck me were the two kids who died in the house fire while their mom was out earning a living as a dancer. I thought, if Mom could just have gotten reliable child care for those kids a tragedy would have been avoided before CPS was even called.

Yeah, we could - instead of propping up an enormous racially-biased bureaucracy with more investment in surveiller et punir - provide childcare, housing, education and medical care directly to the people who need them. But then, of course, there would be no justification for constant surveillance of all these people and the myriad ways in which state funds are funneled to jails, staffing agencies, developers, and the owners of all those businesses.

My problem with a lot of this "let's automate it" stuff is that it assumes that poverty and misery are as natural as the laws of thermodynamics and what we need to do is manage the poor and miserable rather than change the conditions.
posted by Frowner at 12:35 PM on January 9, 2018 [13 favorites]


If this goes like "swatting" goes, then what a horrible, horrible fuck-up. Some complaining kid, well cared for, maybe a little dramatic, goes into foster care for, what, six months, a year, and no one believes him because the algorithm knows best!
posted by Oyéah at 12:56 PM on January 9, 2018


I haven't had a chance to dig into it yet, but here is a PDF describing the algorithm. At a quick glance, I'm seeing phrases like "probit regression model", which suggests to me that it's using standard statistical approaches rather than machine learning approaches. It would be lovely to have a stats geek take a look through it.

Yeah, it's a regression model, or rather two of them -- one based on cases that were later referred again, and one on cases that resulted in foster care.

Some nice things:
They've done some external data validation, comparing the risk scores with children's hospital admissions -- children in the highest risk group were more likely to be admitted than children in the lowest risk group for all reasons, but it was by a modest 1.5-2.5x factor for things like sports injuries, transportation injuries, and falls -- however, the highest risk kids were 17 times more likely than the lowest risk kids to be admitted for physical assault, and 21 times more likely for self-inflicted injuries.
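For anyone unfamiliar with how those multiples are computed: relative risk is just the admission rate in the highest-scoring group divided by the rate in the lowest-scoring group. Invented counts below, not the report's actual figures:

```python
# Relative risk between the highest- and lowest-scoring groups -- invented
# counts, only to show how a "17x" figure is computed.
def relative_risk(events_high: int, n_high: int, events_low: int, n_low: int) -> float:
    """Event rate in the high-risk group divided by the rate in the low-risk group."""
    return (events_high / n_high) / (events_low / n_low)

# hypothetical: 34 assault admissions per 10,000 high-risk kids vs 2 per 10,000 low-risk kids
print(relative_risk(34, 10_000, 2, 10_000))   # -> 17.0
```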

They looked at models with and without race included and found race to not help model performance substantially.

The model is used to synthesize a number of data sources and provide additional information, rather than being the sole arbiter; it is used to increase scrutiny on certain cases but does not automatically dismiss other cases.

Area of concern:
One of the key groups of model variables is poverty, which is measured both by the area a family lives in and by enrollment in various poverty-reduction programs, like SNAP, TANF, etc. Could this disincentivize the use of these programs, if parents who use food stamps are more likely to be investigated by CPS?
posted by Homeboy Trouble at 1:12 PM on January 9, 2018 [10 favorites]


The woman asked repeatedly why she was being investigated, but agreed to a visit the next afternoon.

The home, Lankes found when she returned, had little furniture and no beds, though the 20-something mother insisted that she was in the process of securing those and that the children slept at relatives’ homes. All the appliances worked. There was food in the refrigerator. The mother’s disposition was hyper and erratic, but she insisted that she was clean of drugs and attending a treatment center. All three children denied having any worries about how their mother cared for them. Lankes would still need to confirm the mother’s story with her treatment center, but for the time being, it looked as though the algorithm had struck out.


You know what is so New York Times-y about this whole story?

1. The apparently pointless inclusion of the sex worker whose children died, first off - as far as I can tell, there weren't any real issues with her, since she was a "doting mother", except that she had to work, because back in 1996 the good liberals of America torpedoed the welfare system that would have enabled her to stay at home. Presumably the Times just included her because most NYT readers will think that being a sex worker is itself pathological rather than a job with low barriers to entry, and because it's an extra dose of satisfying sleaze for the readership.

2. The absolutely disgusting disregard for the humanity of the article's subjects. Why would we be astonished that the mother in the quote above would want to know why she is being investigated? Why can't we conceive of a poor woman who can't afford a lot of furniture? Well, basically it's because the NYT reader is assumed to divide the world into "our kind" and "the poor". The poor shouldn't have normal human reactions like being afraid of the cops or of being investigated, because the virtuous poor person demonstrates virtue by immediate and cheerful submission to surveillance. Only a bad, rebellious poor person would be uncomfortable being surveilled and reduced to a number, whereas for "our kind" that kind of treatment would be a huge affront.

It's like, dude, these are people, not bundles of pathologies. Most of the whole problem with the system is the way it treats people like contaminated vessels rather than feeling humans - vessels which need a vigorous autoclaving administered by a technician before they can rejoin society.

Has anyone ever read GK Chesterton's detective story about the murder of a philanthropist, The Miracle of Moon Crescent? It's funny how antiquated Chesterton's argument seems, now that we've all accepted that the good and the great should use algorithms to judge and dispose of the lesser:

Fenner laughed and then looked puzzled. 'I don't understand one thing,' he said. 'If it was Wilson [who killed Wynd], how did Wynd come to have a man like that on such intimate terms? How did he come to be killed by a man he'd seen every day for years? He was famous as being a judge of men.'

Father Brown thumped his umbrella on the ground with an emphasis he rarely showed.

'Yes,' he said, almost fiercely; 'that was how he came to be killed. He was killed for just that. He was killed for being a judge of men.'

They all stared at him, but he went on, almost as if they were not there.

'What is any man that he should be a judge of men?' he demanded. 'These three were the tramps that once stood before him and were dismissed rapidly right and left to one place or another; as if for them there were no cloak of courtesy, no stages of intimacy, no free-will in friendship. And twenty years has not exhausted the indignation born of that unfathomable insult in that moment when he dared to know them at a glance.'


Of course, the idea that a poor person should be treated politely and with friendship rather than shoved about society by a computer is so totally foreign to us that it might as well be ancient Assyrian.
posted by Frowner at 1:42 PM on January 9, 2018 [13 favorites]


Maybe I'm missing something, but it sounds to me like the algorithm is being used as a tool to aid human workers in reaching a judgment rather than making the judgment for them.

If liability is involved in making the decision, all of the responsibility will inevitably be placed on the non-human decisionmaker, which can't be fined or sentenced to prison. In the insurance and finance industries it's called things like "shirking" and "externalizing risk."
posted by rhizome at 1:54 PM on January 9, 2018 [3 favorites]


ErisLordFreedom: And of course, they don't talk about the white, upper-class professional families that get reported and ignored. I've known people who worked in social services who said, "we knew what was going on, but the father was a doctor; no way was that going to be investigated."

I think abuse, especially emotional abuse, is under-diagnosed and under-treated in "respectable" middle-and-upper-class white homes. The parents are Good People Who Are Doing Their Best, or, "well, kid, you drew the short straw for family, tough it out until you're 18."

We do have a problem with treating children as the property of their parents, but it's poor families, especially poor families of color, who get scrutinized and pathologized for this. We need a whole overhaul of how we treat children and how we spend money on them.

Seema Jilani, Guardian: America's child abuse epidemic
A new BBC documentary has investigated why the US, one of the most prosperous nations on earth, has the worst child abuse record in the industrialised world. America's child maltreatment death rate is triple Canada's and 11 times that of Italy. Over the past decade, more than 20,000 American children have been killed by their own family members – that is nearly four times the number of US soldiers killed in Iraq and Afghanistan.
posted by Rosie M. Banks at 1:56 PM on January 9, 2018 [9 favorites]


I have lots of thoughts on this article. IL doesn't use this system (anymore? I guess). I work with DCFS a good bit and it is hard. It's hard on parents, it is hard on kids, it is hard on DCFS.

There is so much to balance.

SES is absolutely never a reason to report a family.

Because in IL, DCFS and most of the child-parent programs are run by HFS (the parent department), sometimes a way to solve socioeconomic issues (child care access, for example) is to escalate them to DCFS, especially if it will prevent child neglect and such.

It becomes complicated in that way. Why do we require things to get so bad before we help out families? I shouldn't need to refer to DCFS to get someone resources. But sometimes that's exactly what happens.

A decision to investigate in IL is made at the end of the screening phone call. So it's a really fast decision based on the verbal report. It surprises me sometimes how fast.

I don't look as badly on investigations as I used to, but it may be because I see enough to see how routine they are, and how many actually get closed. It is so stressful on the families for sure, but at least the cases I work with have been pretty honest and fair, and child removal is only used in the most obvious of cases.

I get the fears and the nightmare of the good parent stuck in the crazy scary DCFS maze. I also get the nightmare of the missed child.

Risks are risks, not truths.
posted by AlexiaSky at 5:10 PM on January 9, 2018 [2 favorites]


This is tangential to the research field I work in and I saw the researchers who developed this algorithm present their work at a conference several years back. They are extremely talented folks with PhDs in child development who were very mindful of the problem of bias in existing child protection decision systems. One of their primary goals for this system was to reduce bias, although they were quite aware that it is impossible to get rid of it completely.

I think one thing people misunderstand about child protection, something that is not well conveyed in this article, is that this isn't a binary system where the only option is investigate/don't investigate. There are multiple other pathways between those two options. Some families are investigated but then assigned a social worker. Some are not directly investigated but go straight to a social worker. Other families are directed to support resources. The algorithm also provides child protection workers with information to support these kinds of decisions.
posted by scantee at 5:45 PM on January 9, 2018 [6 favorites]


Sidebar/tangent: this article has more information about Kiaira Pollard, the dancer mentioned in the OP whose sons died in a fire. In 2012 Pollard pleaded guilty to two counts of involuntary manslaughter.

Also kinda disturbingly, while googling Pollard I noticed that for some reason(?!) the Pennsylvania family services report on the fatality of one of Pollard’s sons is available online.
posted by nicebookrack at 7:25 PM on January 10, 2018


at least the cases I work with have been pretty honest and fair, and child removal is only used in the most obvious of cases.

The investigator that my aunt sent to our house--who was wearing a cross--didn't talk about child removal until she saw the pentagrams. Once she recognized our religious materials, we weren't an "unusual" home anymore (warehouse loft; room spaces are weird) but a "dangerous" one. The kids still have nightmares about the week it took us to get them back.

When we were done with it, the lawyer on the other side apologized to us - the investigator's bias was clear in her report, but it took several days to get all the relevant people in the courtroom at the same time to get a ruling.

I decided I could wait until they were 18 to look for a therapist for me, because I knew that any offhand remark that the therapist decided was "danger to children" was grounds for another investigation. It was less terrifying to hold off on mental health treatment for a decade than to run the risk of going through that again.

Child removal is used in cases where the investigator thinks it's warranted. Most of them have good judgment about this. Some have terrible biases or "instant removal" triggers.

I'm glad to see them using more objective information to decide what to investigate, but the whole damn system is rotten; there isn't any amount of patching that's going to make it "good" instead of "sometimes less harmful."
posted by ErisLordFreedom at 3:50 PM on January 11, 2018 [2 favorites]

