image transcript (via tesseract-ocr)

SECRETARY OF WAR

1000 DEFENSE PENTAGON

WASHINGTON, DC 20301-1000

DEC - 9 2025

MEMORANDUM FOR ALL DEPARTMENT OF WAR PERSONNEL

SUBJECT: Harness Artificial Intelligence Now with GenAI

I am pleased to introduce GenAI.mil, a secure generative artificial intelligence (AI) platform for every member of the Department of War. It is live today and available on the desktops of all military personnel, civilians, and contractors. With this launch we are taking a giant step toward mass AI adoption across the Department. This tool marks the beginning of a new era where every member of our workforce can be more efficient and impactful.

The first GenAI platform capability is Google Gemini, a frontier AI application that can help you write documents, ask questions, conduct deep research, format content, and unlock new possibilities across your daily workflows. Gemini is the first of several enterprise AI applications that will be rolled out on the GenAI platform. It is secure, certified up to Impact Level 5 (IL5), and is fully authorized to handle CUI.

Victory belongs to those who embrace real innovation. Rather than being reliant on the dusty, antiquated systems of a bygone era, we are thinking ahead here in the Department of War. GenAI.mil is part of this monumental transformation. It removes wasted time and focuses more of our energy into decisive results for the warfighter.

Access is straightforward. Navigate to GenAI.mil and you will be able to access the tool with your CAC. The platform is certified secure for operational use on NIPR.

I expect every member of the Department to log in, learn it, and incorporate it into your workflows immediately. Al should be in your battle rhythm every single day.

It should be your teammate. By mastering this tool, we will outpace our adversaries. The power is now in your hands.

memo via https://xcancel.com/kenklippenstein/status/1998829304856068344

  • Bakkoda@lemmy.world · 4 months ago

    Literally hard piping the entirety (remainder) of military intelligence to the highest bidder. Goddamn America is gonna kill America in the most American way.

  • Rhoeri@lemmy.world

    “dEpArTmEnT oF wAr!”

    I expect this to be written in sharpie on a cardboard box with the letter “r” backwards.

    What a bunch of fucking children.

    I can’t imagine the level of reality one must suspend to live with the shame and embarrassment that comes from having voted for this trash.

    • YiddishMcSquidish@lemmy.today

      This is exactly it, isn’t it? Force integration, make it critical and boom “too big to fail” and unlimited bailouts.

  • CerebralHawks@lemmy.dbzer0.com

    First of all, fucking LOL at Google Gemini being secure. They are really setting themselves up for failure here. But something tells me they don’t care and they never will because they never will be held accountable for this colossal fuckup Stevie Wonder could see a mile off.

    Second, he’s right that wars will be won by those who embrace innovation. Pretending AI doesn’t exist or hating it isn’t a great strategy in a war with an opponent who is using it. I’m actually reading a book right now, that takes place in 2026 but was written in, I think 2009? And it’s about a secret military organisation (out of Japan) building AI “people” for the purpose of having them fully man drones. AI-based pilots, basically, but they think they’re real people and are capable of the same level of thought as we are. (Of course this is science fiction, but you think they aren’t thinking about that? Or that it’s even that crazy that it couldn’t happen next year?) (The book is volume 10 of Sword Art Online, which was adapted into the third season of the anime of the same name. They do talk about this stuff in the third and fourth seasons, but the books go way further into detail.)

    So, much as I don’t like it (and think it’s funny that they can trust a data broker’s AI), I agree with Hegseth on that one point. They have to match wits and tech with the enemy or risk being overrun. They don’t have to like the weapons of the battlefield, but they damn sure better be prepared to face them, either with similar armaments of their own, or something different that can counter it (e.g. virus vs virus, or virus vs firewall).

    Edit: But with regards to SAO, I mean we already have self-driving cars, so self-driving drones shouldn’t even be that far off, if we don’t already have them. I think SAO got way too wordy with what a self-driving drone really needs to be. Like Kawahara says it needs to be a full person with morality and all that? No the fuck it doesn’t. It just needs to be able to identify a target and launch weapons at it or crash into it. That’s it.

    • JayleneSlide@lemmy.world

      First of all, fucking LOL at Google Gemini being secure.

Look, I’m all for hating on AI. I also have zero love for any of the tech giants. But this is just a plain ignorant statement. Google and most every cloud services provider has sovereign cloud offerings. Accessing any data above Unclassified is progressively more difficult. Humans are the weak link in the security chain.

      • CerebralHawks@lemmy.dbzer0.com

        Oh, you’re definitely right about humans being the weak link in security. I also don’t doubt that AI can be hardened to be secure.

        I just don’t trust Google. You’re free to if you like, but I won’t do it. That’s not coming from me being a subscriber to the “Fuck AI” community, that comes from me being somewhat of a privacy advocate (at least, for my own, if I’m being honest — though, everyone should recognise their own privacy needs, and work to protect it to the extent they need).

    • cosmicrookie@lemmy.world

      LOL at Google Gemini being secure.

The Google Gemini you and I use is not the same Google Gemini that the government or military uses. We have MS Copilot where I work, and I know that extensive work and money have gone into the license used, in order to maintain data responsibility and security.

      • Arthur Besse@lemmy.ml (OP)
        photo of Microsoft executives after selling your employer a special license to enable copilot to have enhanced security

        "And Then I Said" meme template (1981 photo of President Ronald Reagan, Vice President George H. W. Bush, and several other men in suits, all laughing; no text)

        • Cybersec@piefed.social

          Yeah for real, they don’t give a fuck. You might be better off with security through obscurity using stock stuff.

    • TheOneCurly@feddit.online

      Absolute techbro brain. Something literally appears in my favorite sci-fi media, therefore it should and must exist even if it’s clearly a bad thing in the context of the media (we have built the torment nexus). I honestly think a lack of media literacy is driving a huge portion of tech nonsense, a complete inability to read anything into a text and understand the societal and human impact of the things being discussed. Instead it’s “I’m all out of big ideas, time to mine 80’s movies for things to pitch to investors”.

      LLMs, and genAI in general, are fill-in-the-blank machines. What possible use in military strategy could they provide?

  • grue@lemmy.world

    Not only do I expect it to produce nonsensical advice that results in combat losses and other negligence, I also expect it to be used as an excuse to deflect blame for war crimes.

    • Denjin@feddit.uk

      Why did you massacre these unarmed children, private?

      I was just following orders, sir.

      Whose orders?

      Grok, sir.

      Well, in that case, I think that one’s still breathing.

    • funkless_eck@sh.itjust.works

      We are still operating on the basis that carpet bombing works, despite having executed some of the biggest, deadliest, longest, widest, most precise, hottest, and most expensive bombing campaigns since planes were invented, and, with the exception of dropping nukes on Japan, it has never won a war.

      Not saying AI is smart, but also neither is military command.

      • pdxfed@lemmy.world

        Interestingly, producing unbelievable amounts of bombs does help ammunition companies.

  • atrielienz@lemmy.world

    I hope the plague wipes out his entire family line. Seriously. Black death? Take this one. We don’t need him.

  • stringere@sh.itjust.works

    Battle rhythm?

    The only rhythm this guy has is the aRHYTHMia his heart goes through when he hasn’t had a drink and the DTs kick in.

    • merc@sh.itjust.works

      He uses that term as if it’s something that people in the military say. Maybe it is, but it sure seems like a bullshit term.

      But even if it’s a term they use, “battle rhythm” sounds like something that might happen, I dunno, during a battle? And if you’re in the middle of a battle, is that really a time when you should be using AI? “Hey ChatGPT, what – BOOM – coordinates should I tell them to use – ratatatatatatat – for the – POW – artillery strike!?”

      • Lushed_Lungfish@lemmy.ca

        It is a term used by the military. Basically it refers to a repetitive workflow. So for instance you have this meeting which occurs every month that requires these deliverables, which then feeds into this quarterly meeting, which spawns these action items that provide the deliverables to this semi-annual meeting, which will then trigger a series of tasks that need to be completed in order to prepare for this certain operation.

        • merc@sh.itjust.works

          So, not battle related in any way? It’s like one of those guys with wrap-around shades who records tiktoks in his truck talking about his warrior mindset?

          • Lushed_Lungfish@lemmy.ca

            It can be, but that’s not the context in which it is being used here. The actual warfighting significance of the “battle rhythm” is derived from the older OODA (Observe, Orient, Decide, Act) Loop concept which was how a conflict was to be prosecuted until you were victorious. The idea being that if you could get through your OODA loop faster and more correctly than the opposing side you would outmaneuver and outfight them.

            You would thus have a warfighting battle rhythm which set up the inputs that would feed into however you were processing the information, leading to a decision point that would result in an executable plan to be carried out. You would then observe the follow-on impacts of that execution and then the process starts all over again.

            The crux of the idea is that, due to the abundance of practical cases and information gleaned from various exercises, you know exactly how fast you can gather the info, process it, decide, and execute, and thus you can set up a timeline for your entire operation.

            Never turns out that way in practice though due to what our friend Mr. Clausewitz referred to as “friction”. It can also lead to indecision as you get stuck on the “observe” and “decide” parts and then folks start chucking the responsibility for that decision upwards as they seek a perfect solution. Which is why I tend to advise “an okay plan applied immediately and vigorously is far better than a perfect plan ten minutes too late”.

            • merc@sh.itjust.works

              OODA (Observe, Orient, Decide, Act)

              Or, as it was known in non-military circles back in the day: “Look before you leap”.

              how a conflict was to be prosecuted until you were victorious

              Or dead.

              if you could get through your OODA loop faster and more correctly than the opposing side you would outmaneuver and outfight them.

              Suuuuure…

              Which is why I tend to advise “an okay plan applied immediately and vigorously is far better than a perfect plan ten minutes too late”.

              Which obviously depends on what you’re planning. If you’re planning D-Day, another 10 minutes to get a perfect plan is worth the wait. If you’re planning how you’re going to attack a machine gun nest that’s currently shooting at you, 10 minutes of being shot at might be too long.

              I get that you need to find a balance between completely winging it when planning or fighting a war, vs being caught in analysis paralysis. And, that the more experienced you are, the more you can figure out the optimal balance between the two, and that allows you to be more predictable, which allows higher-ups to have more consistent plans. IMO, this just makes the idea of using AI in your “battle rhythm” even more stupid. Take something where decades of institutional experience allows you to predict a certain “rhythm”, now throw AI into the mix and its ability to quickly spit out a plausible looking output that’s answer-shaped and you either have to explicitly trust the magic 8-ball’s output, or you have to spend an unpredictable amount of time going over the output to see if it is flawed. Either way, you disrupt this rhythm that’s apparently so important.

              • Lushed_Lungfish@lemmy.ca

                Actually yes. A not insignificant part of battle planning is to try and disrupt the opposing side’s OODA loop. The whole idea was to make it so you could react faster and more appropriately.

                Lots of the tools and information processing systems used in the modern battlefield are designed to quickly take raw data and present that into a format which the decision maker can use to actually make the correct decision.

                Which is why I am leery of using AI in this fashion. Oh sure, if you want to use it to create a briefing note on the forecasted widget usages, meeting minutes about the feasibility of a Christmas party at Montana’s, and so on, that’s fine, but to actually parse data that will result in the application of lethal force is a whole different kettle of fish.

                Now yes, there are currently systems that are Auto Engage, but they are very much not AI and the only thing they are generally used for is anti air defense where you have a very limited window in which to successfully prosecute a threat.

                • merc@sh.itjust.works

                  A not insignificant part of battle planning is to try and disrupt the opposing side’s OODA loop

                  That concept has existed long before anyone had ever heard of an “OODA loop”. Sun Tzu wrote about “disrupting the opposing side’s OODA loop” centuries ago, using different terms.

                  Even in WWI it was smoke to prevent them from seeing (observing) clearly. Artillery to disrupt things so they can’t make good plans (which was labelled as “orient” for some strange reason). Barbed wire to prevent them from moving, thereby disrupting the “act” phase. Fundamentally since OODA is such a non-obvious acronym for such an obvious and all-encompassing subject, you can frame everything in terms of “disrupting an OODA loop”, but that doesn’t mean talking about OODA loops is insightful in any way.

                  Also, fundamentally that’s all secondary to having more troops, greater firepower, more range, etc. If one side has a phalanx and the other side has an attack helicopter, the battle isn’t going to come down to whoever has the least disrupted OODA loops.