This really only presents the hypothesis that artificial intelligence (AI) will significantly impact the final stages of various processes, often referred to as the "last mile." However, she does not provide substantial evidence or detailed arguments to support this claim. The article lacks specific examples, data, or references that would substantiate her position, making it more of an opinion piece than a thoroughly backed analysis.
Don't get me wrong, I would love this to be true.
Context is a challenge for LLMs, but to me that challenge feels different in kind from the challenge of incorporating local context into automated decision-making AI like algorithmic hiring, banking decisions, and real estate valuation like Zillow. Those examples are more like "pre-LLM" machine learning, and it's not clear to me that LLMs are inherently limited in the same way. If anything, I think LLMs have the potential to handle a much broader variety of local contextual information more flexibly, because they can ingest natural language directly, whereas in non-LLM machine learning systems the question of how to featurize or represent this information is typically quite bespoke. Take the neighbors practicing death metal in their garage every Sunday and its impact on house valuation: it feels harder to get a non-LLM ML system to "understand" this, as a very sparse "feature," than an LLM.
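To make the featurization contrast concrete, here's a minimal sketch. All names and values are illustrative assumptions, not a real valuation model or API:

```python
# Hypothetical sketch of the featurization gap described above.
# Field names and values are made up for illustration.

# A tabular "pre-LLM" model needs someone to anticipate each local factor
# and encode it as a numeric feature ahead of time:
tabular_features = {
    "sqft": 1850,
    "bedrooms": 3,
    "school_rating": 7,
    # The rare, local signal gets no slot unless an engineer adds one:
    "weekly_noise_nuisance": 1,  # bespoke binary flag, sparse across the dataset
}

# An LLM-based pipeline can ingest the same fact as free text,
# with no schema change required:
listing_notes = (
    "3-bed, 1850 sqft, decent schools. "
    "Neighbors practice death metal in their garage every Sunday."
)
prompt = f"Estimate how these notes affect the home's value:\n{listing_notes}"
```

The point isn't that the LLM's estimate is reliable, only that the odd local fact reaches the model at all without a hand-built feature pipeline.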
Was thinking about this today in the context of hiring.
We have these amazing LLMs that are continually improving. Yet if you say to one, "here's my business, now take over the marketing department," you will end up with so much output that isn't localized that the output as a whole is worth very little. Yet when a highly experienced, localized marketing leader uses the LLM to speed up their work, the whole output is very valuable.
I don’t think this problem is solved solely by defining preferences better. It’s clear that, at least in the short term, a human adaptation layer beyond RLHF alone is needed.