
Fushuan [he/him]

@fushuan@lemm.ee

Posts
1
Comments
624
Joined
3 yr. ago

Huh?

  • Hey, I have trained several models in pytorch, darknet, tensorflow.

    With the same dataset and the same training parameters, the same final iteration of training does return the same weights. There's no randomness unless they specifically add random layers, and that's not really a good idea with RNNs (at least it wasn't when I was working with them). In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
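A minimal, framework-free sketch of the reproducibility claim above. The toy one-parameter "model" and the `train` function are made up for illustration; the point is that when every source of randomness (weight init, shuffling) is driven by one explicit seed, the same data plus the same parameters yield bit-identical weights. In a real framework the analogous knobs would be things like PyTorch's `torch.manual_seed()` and `torch.use_deterministic_algorithms(True)`.

```python
import random

def train(dataset, seed=42, epochs=100, lr=0.1):
    """Toy 1-parameter 'model': fit w so that w*x ~= y by SGD.
    All randomness (init, shuffling) comes from one explicit seed."""
    rng = random.Random(seed)       # the ONLY source of randomness
    w = rng.uniform(-1.0, 1.0)      # seeded weight initialisation
    for _ in range(epochs):
        batch = dataset[:]
        rng.shuffle(batch)          # seeded shuffling, no global RNG
        for x, y in batch:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x
w1 = train(data, seed=42)
w2 = train(data, seed=42)
assert w1 == w2               # same data + same seed -> identical weights
assert abs(w1 - 2.0) < 1e-6   # and they converge to the true value
```

Change the seed (or the data) and the runs diverge; fix both and the result is exactly reproducible, which is the whole argument for publishing the training data.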

  • The model is open, it's not open source!

    How is it so hard to understand? The complete source of the model is not open. It's not a hard concept.

    Sorry if I'm coming off as rude, but I'm getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.

  • The training data is NOT right there. If I can't reproduce the results with the given data, the model is NOT open source.

  • The runner is open source, the model is not

    The service uses both, so calling their service open source gives a false impression to the 99.99% of users that don't know better.

  • The source OP is referring to is the training data they used to compute those weights. Meaning, petabytes of text. Without that we don't know which content they used for training the model.

    The running/training engines might be open source, the pretrained model isn't and claiming otherwise is wrong.

    Nothing wrong with it being this way, most commercial models obviously operate the same way. Just don't claim that the model itself is open source, because a big part of it is that people can reproduce your training to verify that there's no foul play in the input data. We literally can't. That's it.

  • The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.

    When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.

    As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching field that includes LLMs) I have read in the past. Both code and training data are provided.

    Example in the computer vision world, darknet and YOLO: https://github.com/AlexeyAB/darknet

    This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called YOLO. They also provide links to the original dataset the YOLO models were trained on. THIS is open source.

  • What most people understand as deepseek is the app that uses their trained model, not the running or training engines.

    This post mentions open source, not open source code, and that's a big distinction. The source of a trained model is partly the training engine, and to a much bigger extent the input data. We only have access to a fraction of that "source", so the service isn't open source.

    Just to make clear, no LLM service is open source currently.

  • The engine is open source, the model is not.

    The emulator is open source, the games it can run are not.

    I don't see how it's so hard to understand.

    They are saying that the model that the engine is running is open source because they released the model. That's like saying that a game is open source because I released an emulator and the executable file. It's just not true.

  • I did it years ago when they sent me an email suggesting to do exactly that.

  • You can also register a MFA app and lock recovery codes in your PC.

    This has been announced with enough lead time; you still have time to download another app like Aegis or whatever. This is only for new logins, however; you will still have access to Bitwarden wherever you are already logged in.

  • You provided a situation where your phone was robbed and you didn't plan for it so you didn't print the relevant information.

    So... Prepare ahead? Go to a relevant office with identification to get access to the relevant tickets again?

    "What can I do if all the tools at my disposal to get the relevant information are stolen?" You get fucked. Idk what else to tell you.

  • The model itself is not open source and I agree on that. Models don't have source code however, just training data. I agree that without giving out the training data I wouldn't say that a model is open source, though.

    We mostly agree, I was just irked with your semantics. Sorry if I was too pedantic.

  • On my home PC. Same with the 2fa export of aegis.

    "What if you can't access blah"

    There's a limit to interoperability, if you want access to everything everywhere even when you lose access for whatever reason, you will have to concede security.

    You could save a KeePass file with secure notes of both the Bitwarden 2FA and recovery codes and save it in Drive or wherever; you don't need passwords nowadays to access the Google account.

    "But what if I lose access to my phone?"

    Well you are fucked, what else do you want? I guess you could print the recovery keys and store them in a secured box at home.

    Edit: I read further down that your comment was meant to incite others to actually think and do stuff. Sorry if I came off as rude.

  • That's wrong by programmer and data scientist standards.

    The code is the source code; the source code computes the weights, so you can call it a compiler even if it's a stretch, but it IS the source code.

    The training set is the input data. It's more critical than the source code in ML environments, for sure, but nobody calls it source code.

    The pretrained model is the output data.

    Some projects also allow for a "last step pretrained model", or however it's called: "almost trained" models where you can insert your own training data for the last N cycles of training to give the model a bias that might be useful for your use case. This is done heavily in image processing.
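A bare-bones sketch of that "train only the last part" idea (the two-weight model and the `fine_tune` function are invented for illustration, not any real library's API; in PyTorch you'd get the same effect by setting `requires_grad = False` on the backbone parameters):

```python
import random

def fine_tune(w_frozen, w_head, dataset, seed=0, cycles=500, lr=0.01):
    """Toy 'last step' fine-tuning: the pretrained backbone weight
    w_frozen stays fixed; only the head weight w_head is updated
    for the last N cycles on the new, use-case-specific data."""
    rng = random.Random(seed)
    for _ in range(cycles):
        x, y = rng.choice(dataset)      # sample from the new data
        hidden = w_frozen * x           # frozen pretrained layer
        pred = w_head * hidden          # trainable last layer
        grad = 2 * (pred - y) * hidden  # gradient w.r.t. w_head only
        w_head -= lr * grad             # w_frozen is never touched
    return w_head

# Pretend w_frozen = 3.0 came out of a long pretraining run;
# the new data follows y = 6x, so the head should learn roughly 2.0.
tuned = fine_tune(w_frozen=3.0, w_head=0.0,
                  dataset=[(1.0, 6.0), (2.0, 12.0)])
```

The point is that you only need the small final dataset to bias the model, while the expensive pretrained part is reused as-is.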

  • What's a goldfish and wrongly interpreted dnd rules doing up there?!?

  • Oh yeah, your joke was correctly conveyed, dw. I guess my totally valid but probably marginally nonexistent scenario wasn't as funny for the public haha

  • You could also be cishet and into your gf who hasn't fully transitioned and just be into her body. Anyone that tells me that that's not het behaviour can fuck off <3

  • I wasn't talking about a company doing a workaround, but about people buying things from overseas instead of buying things manufactured locally that needed tariffed parts.

    A company that manufactures smart bands in the US will have to increase the price to offset the chip cost increase, but Xiaomi surely won't, so the "local" choice will be even more undesirable. I know that China has a global yoke on smart bands, but you get the idea.

  • B-but GI Joe! (Genocide Inducing Joe Biden)

    /s