A tongue-twister is usually a phrase that is hard to pronounce, like 「四十四隻雌獅子」 (forty-four lionesses). In English, a word or phrase that is hard to say is called a "mouthful."
This article says it is possible to clear this arterial plaque; I don't know whether that is true or not.
Eat unsaturated fats
1. Monounsaturated fats are found in high concentrations in:
2. Polyunsaturated fats are found in high concentrations in:
Omega-3 fats are an important type of polyunsaturated fat. The body can’t make these, so they must come from food.
Healthy lifestyle choices are key:
If healthy lifestyle changes aren’t enough to control high triglycerides, your doctor might recommend:
*I am not a doctor. These are just my personal notes. Ask your physician before making any changes.
After putting it off for a year, I finally went to see a cardiologist, and he had me do a CT calcium score. I did, and the result was mild plaque in my coronary arteries, actually close to moderate; at the moderate level there is a risk of heart disease within 3-5 years.
Heart disease? I never imagined it would happen to me.
So I went to Google to see whether it is possible to dissolve the plaque. The mainstream answer, of course, is that it is impossible and the buildup can only be slowed. And of course there are non-mainstream doctors(**) who say it can be done, like the author of this article. In 2015 he had a CIMT (Carotid Intima-Media Thickness) test and found plaques in his arteries; his arteries were like those of a 73-year-old, while he was 57 at the time. He says he did the following to clear the deposits from his arteries, and when he repeated the CIMT a year later, his arterial age had dropped to 52.
**I did not check whether they are actually doctors.
From this article.
Zero-shot chain of thought: asking ChatGPT "What is the fourth word in the phrase 'I am not what I am'?" ChatGPT: "not", which is wrong (the fourth word is "what"). The author said a little zero-shot chain of thought can help it get the right answer.
ChatGPT does perform much better when you provide more context and specific examples.
…
In the language models developed by OpenAI, there are two primary techniques used to activate its vast store of knowledge and improve the accuracy of responses to prompts.
These techniques are known as “few-shot learning” and “fine-tuning”.
…
The oddly named "few-shot" learning is where the model is trained to recognise and classify a new object or concept with a small number of training examples, typically fewer than 10, though the number can vary. When there is only one example, you might also hear it being called "one-shot" learning.
Few-shot learning in OpenAI models can be implemented both at the ChatGPT prompt and programmatically, by calling the "completion" endpoint of the OpenAI API (Application Programming Interface).
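As a rough illustration of what a few-shot prompt sent to the completion endpoint could look like, here is a minimal sketch. The sentiment-classification task, the model name, and the legacy `openai` Python SDK call are my own assumptions, not from the article:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # or set the OPENAI_API_KEY environment variable

# A handful of labelled examples, followed by the new item we want classified.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "Setup was painless and it just works."
Sentiment:"""

response = openai.Completion.create(
    model="text-davinci-003",  # assumed completion-capable model
    prompt=few_shot_prompt,
    max_tokens=5,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # expected: Positive
```

The point is simply that the examples sit inside the prompt itself; no retraining happens.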
…
The “genius in the room” mental model
Jessica recommends three best practices when constructing a prompt to extract the most relevant answers from ChatGPT, as follows:
…
Zero-Shot Chain of Thought (CoT) Prompting
You may also hear people talking about “zero-shot” learning, where a model is able to classify new concepts or objects that it has not encountered before.
To use this technique, all you need to do is append the words:
"let's think step by step", or
"thinking aloud"
to the end of your prompt! (See the sketch below.)
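A minimal sketch of the same trick through the API, reusing the fourth-word question quoted earlier in these notes; the model name and the legacy `openai` SDK call are assumptions:

```python
import openai

openai.api_key = "YOUR_API_KEY"

question = "What is the fourth word in the phrase 'I am not what I am'?"
# Zero-shot chain of thought: just append the magic phrase to the prompt.
cot_prompt = question + " Let's think step by step."

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model
    prompt=cot_prompt,
    max_tokens=100,
    temperature=0,
)
# With the added phrase the model tends to enumerate the words
# (1 I, 2 am, 3 not, 4 what) before giving its final answer.
print(response["choices"][0]["text"].strip())
```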
…
Fine-tuning
A minimum of a few hundred examples should be your starting point.
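For a feel of what those examples look like, here is a small sketch that writes training records in OpenAI's legacy prompt/completion JSONL layout; the file name and the records themselves are made up for illustration:

```python
import json

# Hypothetical training records in the legacy prompt/completion JSONL layout.
examples = [
    {"prompt": "Review: The battery died in a week.\nSentiment:", "completion": " Negative"},
    {"prompt": "Review: Setup was painless and it just works.\nSentiment:", "completion": " Positive"},
    # ...a few hundred more records like these, per the guidance above
]

with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# The JSONL file is then uploaded and a fine-tuning job is started through
# OpenAI's file-upload and fine-tuning endpoints (details depend on the SDK version).
```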