Thomas<p>As a reminder: don't let LLMs handle anything in the political sphere unless RLHF (Reinforcement Learning from Human Feedback) is in place before the result is shown to anyone*. Also consider automation risks and human factors (HF). That's "Good Old Systems Safety".</p><p>*) ... or unless your goal is to damage a third party's reputation (fake-news style).</p><p><a href="https://mas.to/tags/llm" class="mention hashtag">#<span>llm</span></a> <a href="https://mas.to/tags/ai" class="mention hashtag">#<span>ai</span></a> <a href="https://mas.to/tags/rlhf" class="mention hashtag">#<span>rlhf</span></a> <a href="https://mas.to/tags/automationrisks" class="mention hashtag">#<span>automationrisks</span></a> <a href="https://mas.to/tags/SystemsSafety" class="mention hashtag">#<span>SystemsSafety</span></a></p><p><a href="https://www.theregister.com/2024/12/20/apple_ai_headline_summaries/?td=rt-3a">https://www.theregister.com/2024/12/20/apple_ai_headline_summaries/?td=rt-3a</a></p>