Feature/ait 87 OpenAI message per response guide #3087
Conversation
- Uses a further-reading callout instead of a note
- Removes repeated code initialising the Ably client (the OpenAI client is already instantiated)
- Adds an anchor tag to the "Client hydration" heading
paddybyers left a comment:
lgtm
5. Select the "Message annotations, updates, deletes and appends" option from the list.
6. Click "Create channel rule".

The examples in this guide use the `ai:` namespace prefix, which assumes you have configured the rule for `ai:*`.
I wonder if it's worth mentioning that there's nothing specific or magical about using `ai` as the namespace. It's the kind of thing people can assume without thinking.
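To the reviewer's point: an Ably channel namespace is simply the part of the channel name before the first colon, so `ai:` only matters because the channel rule in this guide is configured to match `ai:*`. A minimal sketch illustrating this (the helper below is hypothetical, not part of the Ably SDK):

```python
def channel_namespace(channel_name: str) -> str:
    """Return the namespace portion of an Ably channel name.

    The namespace is everything before the first colon; a name with no
    colon has no namespace. Nothing is special about `ai` itself -- any
    prefix works, provided the channel rule is configured to match it.
    """
    head, sep, _rest = channel_name.partition(":")
    return head if sep else ""
```

Any prefix (e.g. `chat:` or `support:`) would behave identically if the rule were configured for that pattern instead.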
When a new response begins, publish an initial message to create it. Ably assigns a [`serial`](/docs/messages#properties) identifier to the message. Use this `serial` to append each token to the message as it arrives from the OpenAI model.

<Aside data-type="note">
This implementation assumes each response contains a single `message` type output item. It filters out reasoning tokens and other non-`message` output items. For production use cases with multiple output items or content parts, consider tracking state per response ID and item ID.
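The pattern the quoted passage describes (publish once per response, then append tokens via the message `serial`, filtering out non-`message` output items) can be sketched as follows. This is an illustrative state machine only: `publish_initial` and `append_token` are hypothetical stand-ins for the real Ably publish/append calls, and the event-handler names are assumptions, not the OpenAI SDK's event types.

```python
from dataclasses import dataclass


@dataclass
class ResponseState:
    """Tracks the Ably message serial and accumulated text for one response."""
    serial: str
    text: str = ""


class MessagePerResponseStream:
    """Sketch of the message-per-response pattern: one published message per
    model response, grown by appends as tokens arrive.

    `publish_initial(text) -> serial` and `append_token(serial, delta)` are
    hypothetical stubs standing in for the real Ably operations.
    """

    def __init__(self, publish_initial, append_token):
        self._publish_initial = publish_initial
        self._append_token = append_token
        self._responses: dict[str, ResponseState] = {}
        self._message_items: set[tuple[str, str]] = set()

    def on_item_added(self, response_id: str, item_id: str, item_type: str) -> None:
        # Only `message` output items are streamed; reasoning and other
        # non-`message` items are filtered out.
        if item_type == "message":
            self._message_items.add((response_id, item_id))

    def on_text_delta(self, response_id: str, item_id: str, delta: str) -> None:
        if (response_id, item_id) not in self._message_items:
            return  # ignore deltas from reasoning / non-message items
        state = self._responses.get(response_id)
        if state is None:
            # First token of the response: publish the initial message
            # and remember the serial Ably assigns to it.
            state = ResponseState(serial=self._publish_initial(delta), text=delta)
            self._responses[response_id] = state
        else:
            # Subsequent tokens: append to the existing message via its serial.
            self._append_token(state.serial, delta)
            state.text += delta
```

Tracking state keyed by `(response_id, item_id)` is exactly the generalisation the note suggests for responses with multiple output items or content parts.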
Is it worth referencing documentation here that suggests how to handle reasoning events? E.g. as annotations, or as a chain-of-thought thread?
Yes definitely - those pages haven't landed yet though, so I've created a ticket to review cross-linking generally once the MVP pages have landed. https://ably.atlassian.net/browse/AIT-244
fb0bf3e merged into AIT-129-AIT-Docs-release-branch
Description
Adds an OpenAI message per response token streaming guide.
Similar to the OpenAI message per token guide, but using the message per response pattern with appends.
Checklist