>>24052
i imagine the greatest challenge comes from the sneaky-sneaky glowies digging years-old rabbit holes to push backdoors into code. due to the open source nature of xz, malicious code has to be very well hidden and is therefore difficult for the ai to detect; see >>23973 for a tl;dr of the very lengthy process. plus, it's not like you can just grep the source code for some given string and get a boolean answer on whether it's malicious or not. although i imagine these aren't huge limitations, as the plot was foiled pretty easily by, of all people, a Microsoft dev, and I would imagine after this fiasco more effort will be concentrated on rummaging through rabbit holes for malicious code.
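to illustrate why grepping falls short, here's a minimal sketch (hypothetical scanner and toy strings, not the actual xz payload or any real tool) of a literal-substring scan missing code that reassembles the banned token at runtime:

```python
# hypothetical naive scanner: flag source if a blocklisted substring
# appears literally in the text
SUSPICIOUS = ["system(", "exec", "backdoor"]

def naive_scan(source: str) -> bool:
    """Return True if any blocklisted substring appears verbatim."""
    return any(s in source for s in SUSPICIOUS)

# obvious payload: literal match, gets flagged
blatant = 'os.system("rm -rf /")'

# payload split into harmless-looking fragments that only become
# the banned word when joined at runtime; no literal match exists
obfuscated = 'f = "".join(["ex", "ec"])  # rebuilds the token later'

print(naive_scan(blatant))     # True
print(naive_scan(obfuscated))  # False, slips right past
```

the real xz backdoor went much further than this, hiding its payload in binary test files and only unpacking it during the build, which is exactly the kind of thing a string scan can never see.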