久しぶりですね。皆さんは元気だと希望します。じゃあ、日本語を勉強しましょうか?

14.I.B [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanese](https://kbin.social/tag/japanese) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

B. Items marked with O are what you wanted when you were a child, and items marked with ✗ are what you did not want. Make sentences using ほしい。

> Example:
> O・子供の時、本がほしかったです。
> ✗・子供の時、マフラーがほしくなかったです。

1. O・テレビゲーム 子供の時、テレビゲームがほしかったです。
2. ✗・指輪 子供の時、指輪がほしくなかったです。
3. ✗・腕時計 子供の時、腕時計がほしくなかったです。
4. O・玩具 子供の時、玩具がほしかったです。
5. ✗・花 子供の時、花がほしくなかったです。

[#LearnJapanese](https://kbin.social/tag/LearnJapanese)

1
0
RE: Is Ernest still here?
  • daredevil daredevil Now 100%

    Get well soon, and thanks for the update.

    3
  • 14.I.A [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanese](https://kbin.social/tag/japanese) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    I. 日本がほしいです。

    A. Items marked with O are what you want, and items marked with X are what you do not want. Make sentences using ほしい。

    > Example:
    > [O] 本・本がほしいです。
    > [X] マフラー・マフラーがほしくないです。

    1. [o] お金・お金がほしいです。
    2. [x] セーター・セーターはほしくないです。
    3. [x] パソコン・パソコンはほしくないです。
    4. [o] バイク・バイクがほしいです。
    5. [x] ぬいぐるみ・ぬいぐるみはほしくないです。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0

    13.VII.B [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanese](https://kbin.social/tag/japanese) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    B. Talk about part-time jobs.

    1. アルバイトをしたことがありますか。
       > はい、ことがありました。
    2. いつしましたか。
       > 先年です。
    3. どんなアルバイトでしたか。
       > 日本レストランで給仕をしました。日本語練習ができるので、面白いアルバイトだと思いました。
    4. 一週間に何日働きましたか。
       > 一週間三まで四日働きました。
    5. 一週間にいくらもらいましたか。
       > 多分千ドルごろだけ。でも働くの後で、無料食べ物をくれます。悪くなかったです。
    6. どんなアルバイトがしてみたいですか。どうしてですか。
       > 分かりません。今仕事が好きなので、あまりこのことについて考えりません。ラッキですよね。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0
    What is going on with kbin - a week has passed with no sign of any life
  • daredevil daredevil Now 100%

    defaming them without due diligence, think about that before continuing

    The irony here is unbelievable rofl you can't make this up. My previous statement was calling you childish and desperate for attention. Thanks for reminding me of that fact, so I can stop wasting my time. It is very clear you're not interested in a genuine and constructive conversation.

    2
  • What is going on with kbin - a week has passed with no sign of any life
  • daredevil daredevil Now 100%

    It's not one week of inactivity, it has been going on for months

    Looks at 2 months straight of kbin devlogs since October, when the man was having pretty significant personal issues

    Not to mention he was recently sick, was tending to financial issues and personal matters, and was handling formalities relating to the project. And that's before mentioning that he communicated all of this in the devlog magazine, that he has implemented suggestions multiple times at the community's request to enhance QoL, and that he has given users agency in making mod contributions.

    You might want to take your own advice. This has also allowed me to revise my earlier statement. You people are actually insane.

    2
  • What is going on with kbin - a week has passed with no sign of any life
  • daredevil daredevil Now 100%

    every post I see from them further paints them as very childish and desperate for attention.

    4
  • Cross posting norms and etiquette?
  • daredevil daredevil Now 100%

    I just use it to bring awareness to similar magazines/communities across the fediverse

    5
  • What is going on with kbin - a week has passed with no sign of any life
  • daredevil daredevil Now 83%

    Agreed, every post I see from them further paints them as very childish and desperate for attention.

    4
  • To those genuinely interested in moderating
  • daredevil daredevil Now 100%

    Since @ernest is the owner of that magazine, I think moderator requests have to go through him. Unfortunately, he was dealing with a slight fever a while ago and has also had financial planning and project formalities on his plate. Hopefully things haven't gotten worse. For what it's worth, I think it's great that you're eager to contribute. There have definitely been some spam issues recently, and I hope a solution can be found soon. Maybe posts with a <10% upvote-to-downvote ratio over a day/week could be temporarily quarantined until an admin approves them. Anyways, best of luck with modship.
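    A rough sketch of the kind of check I have in mind (the Post fields and the exact threshold are made up for illustration; this isn't anything kbin actually exposes):

```python
# Hypothetical sketch of the quarantine heuristic described above.
# The Post fields and threshold are illustrative only; kbin has no such hook.
from dataclasses import dataclass

@dataclass
class Post:
    upvotes: int
    downvotes: int

def should_quarantine(post: Post, threshold: float = 0.10, min_votes: int = 10) -> bool:
    """Flag a post for admin review when its upvote share drops below the threshold."""
    total = post.upvotes + post.downvotes
    if total < min_votes:
        return False  # too few votes to judge fairly
    return post.upvotes / total < threshold

# Example: 1 upvote vs. 19 downvotes -> 5% upvote share -> quarantined
print(should_quarantine(Post(upvotes=1, downvotes=19)))  # True
```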

    2
  • 15.4.I [#genki2wb](https://kbin.social/tag/genki2wb) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanese](https://kbin.social/tag/japanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    I. Make sentences using the cues.

    1. 食堂があります。
       > これは食堂がある建物です。
    2. 私は先生に借りました。
       > これは私が先生に借りた辞書です。
    3. 父は私にくれました。
       > これは父が私にくれたです。
    4. 友達は住んでいます。
       > これは友達が住んでいるです。
    5. 最近で来ました。
       > これは最近で喫茶店に来たです。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0
    I feel like I'm missing out by not distro-hopping
  • daredevil daredevil Now 100%

    I've only felt the need to change distros once, from Linux Mint to EndeavourOS, because I wanted Wayland support. I realize there were ways to get Wayland working on Mint in the past, but I've already made the switch and have gotten used to my current setup. I personally don't feel like I'm missing out by sticking to one distro, tbh. If you're enjoying Mint, I'd suggest sticking with it unless another distro fulfills a specific need you can't meet on Mint.

    2
  • What's the quickest way to find a magazine/community you've subscribed to?
  • daredevil daredevil Now 100%

    You could make a (private) collection for your subscribed magazines. Not exactly the feature you were asking for, but it's an option to curate your feed. On Firefox I have various collections bookmarked and tagged so accessibility is seamless.

    2
  • 15.3.II [#genki2wb](https://kbin.social/tag/genki2wb) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanese](https://kbin.social/tag/japanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    1. 来週、大きい台風が着ます。何しておきますか。
       > お金を下ろしたり、食べ物を買ったり、服を準備しておきたりなどです。
    2. 来週、試験があります。何をしておきますか。
       > 教科書を復習したり、先生に聞いておきたりなどです。
    3. 今度の休みに富士山に登ります。何をしておかなければいけませんか。
       > 水を忘れなかったり、厚い服を持っていておかないといけません。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0

    15.3.I [#genki2wb](https://kbin.social/tag/genki2wb) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanese](https://kbin.social/tag/japanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    I. Read the first half of the sentences carefully. Then choose from the list what you will do in preparation and complete the sentences, using 〜ておきます。

    1. あの店ではカード (credit card) が使えないので、____
       > お金を下ろしておきます。
    2. 来週、北海道を旅行するので、
       > 旅館に予約しておきます。
    3. 電車で東京に行くので、
       > 電車予定を調べておきます。
    4. 今度の週末、友達とカラオケに行くので、
       > 新しい歌を練習しておきます。
    5. 週末デートをするので、
       > 素敵レストランを探しておきます。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0

    13.VII.A [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanese](https://kbin.social/tag/japanese) [#learnjapanese](https://kbin.social/tag/learnjapanese) [#japanesepractice](https://kbin.social/tag/japanesepractice)

    VII. まとめの練習

    A. Answer the following questions.

    1. 子供の時に何ができましたか。何ができませんでしたか。
       子供の時本が読めれたが、夜は外で一人で遊べませんでした。
    2. 百円で何が買えますか。
       多分ノートが買えます。
    3. どこに行ってみたいですか。どうしてですか。
       日本語を勉強するのが有益なので、日本に行ってみたいですよ。
    4. 子供の時、何がしてみたかったですか。
       時々漫画を書いてみたかったです。
    5. 今、何がしてみたいですか。
       もっと日本語が上手になりたいです。
    6. 一日に何時間ぐらい勉強しますか。
       最近、一時間まで二時間ぐらい勉強します。けど、すぐもっと練習したいです。お互い頑張りましょうか。
    7. 一週間に何回レストランに行きますか。
       最近、一週間に三まで四回に行きます。多分行くすぎると思います。
    8. 一か月にいくらぐらい使いますか。
       だけ少し使います。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0

    久しぶりですね。皆さんは元気だと希望します。このごろすごくいそがしいので、すみません皆さんと一緒に話せませんでした。さて、日本語を練習しましょうか。

    13.VI.A [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanese](https://kbin.social/tag/japanese) [#japanesereview](https://kbin.social/tag/japanesereview)

    A. Look at the following pictures and make sentences as in the example.

    > Ex. Twice a day
    > 一日に二回食べます。

    1. Brush teeth three times a day. 一日三回歯を磨きます。
    2. Sleep seven hours a day. 一日七時間寝ます。
    3. Study three hours a day. 一日三時間勉強します。
    4. Clean room once a week. 一週間一回部屋で掃除する。
    5. Do laundry twice a week. 一週間二回洗濯をします。
    6. Work part-time three days a week. 一週間三回バイトで働きます。
    7. Go to school 5 days a week. 一週間五日学校へ行きます。
    8. Watch a movie once a month. 一ヶ月一回映画を見ます。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0
    To all moderators: Here is how you can add a banner using CSS very easily to your Kbin magazines!
  • daredevil daredevil Now 100%

    I imagine something like this

    Duly noted, I missed a line of text. Won't try to help in the future

    1
    Johnny Depp & Matthew Perry (1988)
    60
    4
    terminaltrove.com

    Terminal Trove showcases the best of the terminal. Discover a collection of CLI, TUI, and more developer tools at Terminal Trove.

    49
    1
    Useful resources [Updated 2023-12-16]
  • daredevil daredevil Now 100%

    Starting to wonder if I should just make a doc at this point...

    1
  • www.youtube.com

    イニシエノウタ/デボル · SQUARE ENIX MUSIC · 岡部 啓一 · MONACA · NieR Gestalt & NieR Replicant Original Soundtrack · Released on: 2010-04-21

    6
    0
    What shows have extended therapy arcs?
  • daredevil daredevil Now 100%

    Came here with this show in mind. Would recommend.

    2
  • B. Answer the following questions. Use 〜なら whenever possible.

    > Example:
    > Q: スポーツをよく見ますか。
    > A: ええ、野球なら見ます。/ いいえ、見ません。

    1. Q: 外国語ができますか。
       A: ええ、ちょっと日本語ができます。
    2. Q: アルバイトをしたことがありますか。
       A: ええ、バイトならしたことがありました。
    3. Q: 日本の料理が作れますか。
       A: 中国の料理なら作れるが、日本語の料理は作れません。
    4. Q: 有名人に会ったことがありますか。
       A: ええ、有名人なら会ったことがありました。
    5. Q: 楽器ができますか。
       A: ええ、バイオリンならできます。
    6. Q: お金が貸せますか
       A: ええ、お金なら貸せます。

    1
    0
    "Initials" by "Florian Körner", licensed under "CC0 1.0". / Remix of the original. - Created with dicebear.comInitialsFlorian Körnerhttps://github.com/dicebear/dicebearUN
    Jump
    [KDE Plasma] Minimal Dark Customization
    Everybody’s talking about Mistral, an upstart French challenger to OpenAI
  • daredevil daredevil Now 100%

    I haven't, but I'll keep this in mind for the future -- thanks.

    1
  • Everybody’s talking about Mistral, an upstart French challenger to OpenAI
  • daredevil daredevil Now 100%

    I believe I was when I tried it before, but it's possible I misconfigured things.

    1
  • Everybody’s talking about Mistral, an upstart French challenger to OpenAI
  • daredevil daredevil Now 100%

    I'll give it a shot later today, thanks

    edit: Tried out mistral-7b-instruct-v0.1.Q4_K_M.gguf via the LM Studio app. It runs smoother than I expected -- I get about 7-8 tokens/sec. I'll definitely be playing around with this some more later.
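    For anyone who would rather script it than use the LM Studio GUI, a minimal sketch with llama-cpp-python would look something like this (the model path and settings are placeholders for whatever build you downloaded):

```python
# Minimal sketch: run a local GGUF build of Mistral 7B Instruct with llama-cpp-python.
# Install with `pip install llama-cpp-python`; the model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # local GGUF file
    n_ctx=4096,      # context window to allocate
    n_gpu_layers=0,  # raise this to offload layers to the GPU if you have spare VRAM
)

# Mistral's instruct models expect the [INST] ... [/INST] prompt format.
output = llm(
    "[INST] Explain what a mixture-of-experts model is in two sentences. [/INST]",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"].strip())
```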

    3
  • Everybody’s talking about Mistral, an upstart French challenger to OpenAI
  • daredevil daredevil Now 100%

    That's good to know. I do have 8GB VRAM, so maybe I'll look into it eventually.

    2
  • Everybody’s talking about Mistral, an upstart French challenger to OpenAI
  • daredevil daredevil Now 100%

    I'm looking forward to the day where these tools will be more accessible, too. I've tried playing with some of these models in the past, but my setup can't handle them yet.

    3
  • arstechnica.com

    On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a "mixture of experts" (MoE) model with open weights that reportedly truly matches OpenAI's GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI's Andrej Karpathy and Jim Fan. That means we're closer to having a ChatGPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.

    Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral's models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context "weights" are the computer files that represent a trained neural network.)

    Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta's much larger LLaMA 2 70B (70 billion parameter) large language model and that it matches or exceeds OpenAI's GPT-3.5 on certain benchmarks, as seen in the chart below.

    [Chart: Mixtral 8x7B performance vs. LLaMA 2 70B and GPT-3.5, provided by Mistral.]

    The speed at which open-weights AI models have caught up with OpenAI's top offering a year ago has taken many by surprise. Pietro Schirano, the founder of EverArt, wrote on X, "Just incredible. I am running Mistral 8x7B instruct at 27 tokens per second, completely locally thanks to @LMStudioAI. A model that scores better than GPT-3.5, locally. Imagine where we will be 1 year from now."

    LexicaArt founder Sharif Shameem tweeted, "The Mixtral MoE model genuinely feels like an inflection point — a true GPT-3.5 level model that can run at 30 tokens/sec on an M1. Imagine all the products now possible when inference is 100% free and your data stays on your device." To which Andrej Karpathy replied, "Agree. It feels like the capability / reasoning power has made major strides, lagging behind is more the UI/UX of the whole thing, maybe some tool use finetuning, maybe some RAG databases, etc."

    ### Mixture of experts

    So what does mixture of experts mean? As this excellent Hugging Face guide explains, it refers to a machine-learning model architecture where a gate network routes input data to different specialized neural network components, known as "experts," for processing. The advantage of this is that it enables more efficient and scalable model training and inference, as only a subset of experts are activated for each input, reducing the computational load compared to monolithic models with equivalent parameter counts.

    In layperson's terms, a MoE is like having a team of specialized workers (the "experts") in a factory, where a smart system (the "gate network") decides which worker is best suited to handle each specific task. This setup makes the whole process more efficient and faster, as each task is done by an expert in that area, and not every worker needs to be involved in every task, unlike in a traditional factory where every worker might have to do a bit of everything.

    OpenAI has been rumored to use a MoE system with GPT-4, accounting for some of its performance. In the case of Mixtral 8x7B, the name implies that the model is a mixture of eight 7 billion-parameter neural networks, but as Karpathy pointed out in a tweet, the name is slightly misleading because, "it is not all 7B params that are being 8x'd, only the FeedForward blocks in the Transformer are 8x'd, everything else stays the same. Hence also why total number of params is not 56B but only 46.7B."

    Mixtral is not the first "open" mixture of experts model, but it is notable for its relatively small size in parameter count and performance. It's out now, available on Hugging Face and BitTorrent under the Apache 2.0 license. People have been running it locally using an app called LM Studio. Also, Mistral began offering beta access to an API for three levels of Mistral models on Monday.
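    As a rough illustration of the routing idea described above, the toy sketch below scores a token against a gate network, keeps only the top-k experts, and mixes their outputs. It is a minimal example of sparse MoE routing under simplified assumptions, not Mixtral's actual architecture.

```python
# Toy sketch of sparse mixture-of-experts routing (illustrative only, not Mixtral's code).
# A gate scores every expert for a token, and only the top-k experts are evaluated.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" here is just a random linear map standing in for a feed-forward block.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate = rng.normal(size=(d_model, n_experts))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts, weighted by softmaxed gate scores."""
    scores = x @ gate                     # one score per expert
    chosen = np.argsort(scores)[-top_k:]  # indices of the k highest-scoring experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()              # softmax over only the chosen experts
    # Only the chosen experts run, which is where the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,) -- same dimensionality in and out
```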

    116
    26
    arstechnica.com


    3
    0
    Linguistics daredevil Now 100%
    Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English.
    arstechnica.com


    1
    0

    **II. Complete the dialogue, using the volitional + と思っています。**

    かな:1. (What do you intend to do for the next holiday?)
    ::: spoiler answer
    来休みはどうするつもりですか?
    :::
    ジョン:2. ________ ので、3. ________________。
    ::: spoiler answer
    最近家族をあまり見ないので、会うつもりです。
    :::
    かな:いいですね。
    ジョン:かなさんは?
    かな:4. ________________________。
    ::: spoiler answer
    先学期の成績は良くないので、よく勉強するつもりです。
    :::
    ジョン:そうですか。

    1
    0
    Best/usable free Evernote alternative
  • daredevil daredevil Now 100%

    Yeah I wanted to use it for work until I read that. Instead I'm just using Vimwiki since I really only need markdown and linking.

    1
  • ### 15-2 Volitional Form + と思っています

    **I. Read the first half of the sentences carefully. Then, choose what you are going to do from the list and complete the sentences, using the volitional + と思っています。**

    * ボランティア活動に参加する
    * 両親にお金を借りる
    * 練習する
    * お風呂に入って早く寝る
    * 保険に入る
    * 花を送る

    1. 将来病気になるかもしれないので、
       ::: spoiler spoiler
       保険に入れると思っています。
       :::
    2. お金がないので、
       ::: spoiler spoiler
       両親にお金を借りられると思っています。
       :::
    3. 一日中運動して疲れたので、
       ::: spoiler spoiler
       お風呂に入って早く寝られると思っています。
       :::
    4. 夏休みに時間があるので、
       ::: spoiler spoiler
       練習できると思っています。
       :::
    5. 母の日に、
       ::: spoiler spoiler
       花を送れると思っています。
       :::
    6. 自転車に乗れないので、
       ::: spoiler spoiler
       ボランティア活動に参加できないと思っています。
       :::

    1
    0

    またちょっと練習しました。多分この練習がもうしたと思います。しかし、復習はいい習慣だと思います。

    13.V.A Answer the questions as in the example.

    > Example
    > Q: メアリーさんは今朝、コーヒーを飲みましたか。
    > A: (O tea X coffee) -> 紅茶なら飲みましたが、コーヒーは飲みませんでした。

    1. Q: メアリーさんはバイクに乗れますか。
       A: (O bicycle X motorbike) -> 自転車なら乗れるが、バイクは乗れなかったです。
    2. Q: メアリーさんはニュージランドに行ったことがありますか。
       A: (O Australia X New Zealand) -> オーストラリアなら行ったことがあるが、ニュージランドにことがありませんでした。
    3. Q: メアリーさんはゴルフをしますか。
       A: (O tennis X golf) -> テニスならするが、ゴルフをしませんでした。
    4. Q: けんさんは日本の経済に興味がありますか。
       A: (O history X economics) -> 日本の歴史なら興味があるが、経済が興味がありませんでした。
    5. Q: けんさんは彼女がいますか。
       A: (O friend X girlfriend) -> 友達ならいるが、彼女がいませんでした。
    6. Q: けんさんは土曜日に出かけられますか。
       A: (O Sunday X Saturday) -> 日曜日なら出かけられるが、土曜日が出かけられません。

    1
    0

    久しぶりですね〜皆さんは元気だったと希望しますよ。今日はちょっと違い物を試します。/kbinのユーザーはよくスレのほうがmicroblogより好きなので、スレで日本語の練習を作りました。

    ところで、今週は新しい仕事を始まったので、最近すごくいそがしいですよ。すごく面白いと思っていました。じゃあ、ちょっと日本語を練習しましょうか?

    Respond to the following sentences using 〜てみる.

    > Example
    > A: この服は素敵ですよ。
    > B: じゃあ、着てみます。

    1. A: 経済の授業は面白かったですよ。
       B: じゃあ、取ってみます。
    2. A: あの映画を見て泣きました。
       B: じゃあ、見てみます。
    3. A: この本は感動しました。
       B: じゃあ、読んでみます。
    4. A: このケーキは美味しいですよ。
       B: じゃあ、食べてみます。
    5. A: 東京は面白かったですよ。
       B: じゃあ、旅行してみます。
    6. A: このCDは良かったですよ。
       B: じゃあ、聞いてみます。
    7. A: この辞書は便利でしたよ。
       B: じゃあ、読んでみます。

    1
    0

    13.III.B [#genki2textbook](https://kbin.social/tag/genki2textbook) [#japanesereview](https://kbin.social/tag/japanesereview) [#japanese](https://kbin.social/tag/japanese)

    B. Look at the pictures in [A](https://kbin.social/m/LearnJapanese/p/3316395/13-III-A-genki2textbook-japanesereview-japanese-Describe-the-following-pictures-using-sou-Example-kono-shou-siha-mei-weishisoudesu) and make sentences as in the example.

    > Example
    > 寿司 -> 美味しそうな寿司です。

    1. ケーキ 甘そうなケーキです。
    2. カレー 辛そうなカレーです。
    3. 服 古そうな服です。
    4. 先生 厳しそうな先生です。
    5. 時計 新しそうな時計です。
    6. ヤクザ 怖そうなヤクザです。
    7. 男の人 寂しそうな男の人です。
    8. 女の人 嬉しそうな女の人です。
    9. おじいさん 元気そうなおじいさんです。
    10. おばあさん 意地悪そうなおばあさんです。
    11. 女の人 優しそうな女の人です。
    12. 弁護士 頭がよさそうな弁護士です。
    13. 学生 眠そうな学生です。
    14. セーター あたたかそうなセーターです。
    15. 子供 悲しそうな子供です。

    [#LearnJapanese](https://kbin.social/tag/LearnJapanese)

    1
    0

    daredevil

    daredevil@kbin.social

    I'm just an internet explorer.

    日本語 OK • 中文 OK • tiếng việt OK

    @linguistics @cats @dogs @learnjapanese @japanese @residentevil @genshin_impact @genshinimpact @classicalmusic @persona @finalfantasy

    #linguistics #nlp #compling #linux #foss