
Gemma 2 AI Released! Is It Really That Good? 

Mervin Praison
43K subscribers
7K views

Published: 4 Oct 2024

Comments: 19
@supercurioTube · 3 months ago
Note that by default, Ollama downloads q4_0-quantized models. That makes them quick, but generation quality can suffer quite a bit compared to the recommended q5_K_M or larger. Also, at the moment the Gemma 2 27B Ollama models are broken: they tend to keep going indefinitely.
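If you want a higher-precision quant instead of the default, Ollama lets you pull a model by its quantization tag. A minimal sketch; the exact tag name is an assumption, so check the tags actually published on the Ollama model library page first:

```shell
# Pull a higher-precision quantization instead of the default q4_0.
# The tag below is an example; verify it against the Ollama library listing.
ollama pull gemma2:27b-instruct-q5_K_M

# Run that specific quant rather than the default tag.
ollama run gemma2:27b-instruct-q5_K_M
```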
@VeevFloy · 3 months ago
Wow, thanks for the warning! What do you mean when you say they run endlessly?
@bodyguardik · 3 months ago
Correct. Most runs start an infinite loop of hallucinations.
@ds920 · 3 months ago
True, no luck with Ollama's 27B model.
@mycelia_ow · 2 months ago
I had to manually limit its token size because, yeah, it just kept going lmao, no idea why. Even the 9B Gemma 2 models don't have this "bug".
@mycelia_ow · 2 months ago
@@VeevFloy When you generate something with it, it just keeps outputting text endlessly; it never stops. You have to set a token limit manually.
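Ollama exposes a per-request token cap through the `num_predict` option. A minimal sketch of capping generation via the REST API, assuming a local Ollama server on the default port 11434 and that the `gemma2` model is already pulled:

```shell
# Cap generation at 256 tokens so the model cannot run on indefinitely.
# num_predict is Ollama's standard option for the max tokens to generate.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Why is the sky blue?",
  "options": { "num_predict": 256 }
}'
```

The same parameter can also be baked into a Modelfile (`PARAMETER num_predict 256`) so every run of that model is capped.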
@drimscape · 20 days ago
Gemma 2 is amazing. It's fast and cool. I'm really happy and impressed after chatting with it.
@MeinDeutschkurs · 3 months ago
There is something wrong with the Gemma 2 27B on my Mac. The model regularly freaks out during output.
@Gabriel-tp3vc · 3 months ago
Known issue, if by "freaks out" you mean it never stops.
@MeinDeutschkurs · 3 months ago
@@Gabriel-tp3vc Yes, exactly. I'll try another quantized version.
@mycelia_ow · 2 months ago
@@MeinDeutschkurs Did you test out different ones? Do they all have this issue? I have the same issue on Windows using LM Studio.
@Gabriel-tp3vc · 3 months ago
The lcm program looks OK to me; not sure why that environment would mark the generated code as a fail. This computes the correct LCM for me:

    from math import gcd

    def lcm(nums):
        lcm = nums[0]
        for i in range(1, len(nums)):
            lcm = (lcm * nums[i]) // gcd(lcm, nums[i])
        return lcm
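A quick sanity check of the commenter's snippet, using only standard-library Python (the test inputs are my own):

```python
from math import gcd

def lcm(nums):
    # Fold pairwise LCMs across the list: lcm(a, b) = a * b // gcd(a, b)
    result = nums[0]
    for i in range(1, len(nums)):
        result = (result * nums[i]) // gcd(result, nums[i])
    return result

print(lcm([4, 6, 8]))   # -> 24
print(lcm([3, 5, 7]))   # -> 105
```

On Python 3.9+, `math.lcm(*nums)` computes the same thing directly.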
@TheCopernicus1 · 3 months ago
Wow, Ollama already supports it!
@bodyguardik · 3 months ago
Not really. It's very unstable.
@AnjarMoslem · 3 months ago
I'm downloading it now, going to try it on Ollama.
@mycelia_ow · 2 months ago
@@bodyguardik the model itself is shit
@Canna_Science_and_Technology · 3 months ago
It's an import error. How did it fail?
@john_blues · 3 months ago
Jumping around way too fast, my friend.