#46
Quote:
Last edited by marciero; 10-11-2024 at 05:07 AM.
#47
I recently went through a nearly year-long process of getting (liquidated, per court order) assets through a probate distribution, and the bank's personnel made several references/excuses about how they had to work around their computer's algorithms (which seemed to be biased toward retention of liquid assets).
I imagine that, with AI programming accelerating the evolution of the algorithms that satisfy their Wall Street shareholders, an executor would likely die of old age before seeing any of the assets distributed. I mentioned this to them several times to humor myself.
#48
Quote:
I'm skeptical of an AI algorithm here, as it is probably just a simple allocation algorithm with a liquidation preference. It should be logical, and you should ask for a written explanation of the decisions made: why you sold this, kept that, etc. It's always easier to distribute cash and other liquid assets to beneficiaries than illiquid assets (e.g., real estate). If they explain this, it should make sense; if not, you have more questions to ask.
Last edited by verticaldoug; 10-13-2024 at 02:31 AM.
#49
Quote:
And they may have just been sharing their humor about their employer wanting to keep all the money.
Last edited by HenryA; 10-12-2024 at 09:01 PM.
#50
Quote:
Regarding explainability: don't some European countries have laws about explainability in situations like this, for example, explaining why you were turned down for a loan? I thought the US was considering this too. Explainability can be tricky. ML models range from the completely transparent/explainable to more black-box models. Some of the less transparent models offer variable-importance measures. But the thing is, you can have two different models that make the same or similar predictions but for different reasons.
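That last point is easy to illustrate. A minimal sketch (hypothetical data, made-up weights) with two linear models over perfectly correlated features: because the features carry identical information, each model can lean entirely on a different feature, yielding the same predictions but opposite variable importances.

```python
# Hypothetical rows of (x1, x2); the two features are perfectly
# correlated (x2 == x1), so either one alone can explain the target.
rows = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]

# Model A puts all of its weight on x1; model B puts it all on x2.
weights_a = (2.0, 0.0)
weights_b = (0.0, 2.0)

def predict(weights, row):
    """Linear model: dot product of weights and features."""
    return sum(w * x for w, x in zip(weights, row))

preds_a = [predict(weights_a, r) for r in rows]
preds_b = [predict(weights_b, r) for r in rows]

print(preds_a)  # [2.0, 4.0, 6.0]
print(preds_b)  # [2.0, 4.0, 6.0] -- identical predictions
print(weights_a, weights_b)  # but opposite "variable importance"
```

An explanation derived from model A ("you were declined because of x1") and one from model B ("because of x2") would both be faithful to their model yet contradict each other, which is why a variable-importance number on its own is not a full explanation.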