Note that by default, Ollama downloads q4_0-quantized models. They're quick, but generation quality can suffer quite a bit compared to the recommended q5_K_M or larger quants. Also, at the moment the Gemma 2 27b Ollama models are broken: they tend to keep generating indefinitely.
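If you want a higher-quality quant, you can usually pull one explicitly by tag instead of taking the default. A sketch (the exact tag names vary per model, so check the model's tags page on the Ollama library first):

```shell
# Pull a specific quantization instead of the default q4_0.
# The tag below is illustrative; list available tags on ollama.com/library
ollama pull llama3.1:8b-instruct-q5_K_M
```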
@@VeevFloy When you generate something with it, it just keeps outputting text endlessly and never stops. You have to set a token limit manually.
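One way to cap output length is Ollama's `num_predict` option, which limits the number of tokens generated. A sketch against the local REST API (model tag and prompt are just examples):

```shell
# Limit generation to 256 tokens so a runaway model eventually stops
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:27b",
  "prompt": "Why is the sky blue?",
  "options": { "num_predict": 256 }
}'
```

The same option can be baked into a Modelfile with `PARAMETER num_predict 256` if you want it applied by default.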
The lcm program looks OK to me; I'm not sure why that environment would mark the generated code as a fail. This computes the correct LCM for me:

```python
from math import gcd

def lcm(nums):
    result = nums[0]
    for i in range(1, len(nums)):
        result = (result * nums[i]) // gcd(result, nums[i])
    return result
```
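For what it's worth, on Python 3.9+ the standard library already provides `math.lcm`, which accepts any number of arguments, so the loop can be replaced entirely:

```python
from functools import reduce
from math import lcm

def lcm_list(nums):
    # math.lcm is variadic on 3.9+; reduce works on older-style call sites too
    return reduce(lcm, nums)

print(lcm_list([4, 6, 10]))  # → 60
```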