+					What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
+				</HelperText>
+			</Stack>
+
+			<Stack maxWidth={'560px'}>
+				<HStack justify='between' align='center'>
+					<Label htmlFor='top_p'>Top P <Text as='span' color='gray' weight='regular'>(Default: 1)</Text></Label>
...
+					An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
+					<br /><br />
+					We generally recommend altering this or temperature but not both.
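As an aside, the temperature description above can be illustrated with a minimal sketch (not part of this PR; all names here are hypothetical): logits are divided by the temperature before the softmax, so a low value sharpens the distribution toward the top token while a high value flattens it.

```typescript
// Illustrative only -- not code from this PR. Shows how sampling
// temperature reshapes a token probability distribution.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
	const scaled = logits.map((l) => l / temperature);
	const max = Math.max(...scaled); // subtract max for numerical stability
	const exps = scaled.map((s) => Math.exp(s - max));
	const sum = exps.reduce((a, b) => a + b, 0);
	return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.5];
const sharp = softmaxWithTemperature(logits, 0.2); // low T: near-deterministic
const flat = softmaxWithTemperature(logits, 2.0);  // high T: closer to uniform
```

With these logits, the low-temperature distribution puts almost all mass on the first token, while the high-temperature one spreads it much more evenly, matching the "focused vs. random" behavior the helper text describes.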
frontend/src/types/RavenBot/RavenBot.ts (+8 −2)
@@ -26,12 +26,18 @@ export interface RavenBot {
 	module?: string
 	/** Is AI Bot? : Check */
 	is_ai_bot?: 0 | 1
-	/** Debug Mode : Check - If enabled, stack traces of errors will be sent as messages by the bot */
-	debug_mode?: 0 | 1
 	/** Model : Data */
 	model?: string
+	/** Temperature : Float - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. */
+	temperature?: number
+	/** Debug Mode : Check - If enabled, stack traces of errors will be sent as messages by the bot */
+	debug_mode?: 0 | 1
 	/** Reasoning Effort : Select - Only applicable for OpenAI o-series models */
 	reasoning_effort?: "low" | "medium" | "high"
+	/** Top P : Float - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
+
+	We generally recommend altering this or temperature but not both. */
+	top_p?: number
 	/** OpenAI Assistant ID : Data */
 	openai_assistant_id?: string
 	/** Enable Code Interpreter : Check - Enable this if you want the bot to be able to process files like Excel sheets or data from Insights.
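The `top_p` docstring above describes nucleus sampling; the filtering step it refers to can be sketched as follows (illustrative only, not code from this PR): sort token probabilities in descending order and keep the smallest prefix whose cumulative mass reaches `top_p`.

```typescript
// Illustrative only -- not code from this PR. Nucleus (top-p) filtering:
// keep the smallest set of most-probable tokens whose cumulative
// probability mass reaches topP; only those remain sampling candidates.
function topPFilter(probs: number[], topP: number): number[] {
	const indexed = probs
		.map((p, i) => ({ p, i }))
		.sort((a, b) => b.p - a.p); // most probable first
	const kept: number[] = [];
	let cumulative = 0;
	for (const { p, i } of indexed) {
		kept.push(i);
		cumulative += p;
		if (cumulative >= topP) break; // nucleus reached
	}
	return kept.sort((a, b) => a - b); // indices of surviving tokens
}

// With top_p = 0.5, only the single most probable token (index 1) survives.
const survivors = topPFilter([0.1, 0.55, 0.2, 0.15], 0.5); // -> [1]
```

This is why the field description says `top_p = 0.1` restricts sampling to the tokens comprising the top 10% of probability mass, and why adjusting it alongside `temperature` is discouraged: both reshape the same candidate distribution.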
raven/raven_bot/doctype/raven_bot/raven_bot.json (+24 −2)
@@ -18,9 +18,12 @@
  "module",
  "ai_tab",
  "is_ai_bot",
- "debug_mode",
  "model",
+ "temperature",
+ "column_break_ebil",
+ "debug_mode",
  "reasoning_effort",
+ "top_p",
  "ai_section",
  "openai_assistant_id",
  "enable_code_interpreter",
@@ -188,13 +191,32 @@
   "fieldtype": "Data",
   "label": "OpenAI Vector Store ID",
   "read_only": 1
+ },
+ {
+  "default": "1",
+  "description": "What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
+  "fieldname": "temperature",
+  "fieldtype": "Float",
+  "label": "Temperature",
+  "non_negative": 1
+ },
+ {
+  "fieldname": "column_break_ebil",
+  "fieldtype": "Column Break"
+ },
+ {
+  "default": "1",
+  "description": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both.",