Langboat Paper Assistant (LPA)
Langboat Paper Assistant (LPA) is an AI product that helps users write English academic papers, focusing on two common pain points in the writing process. To address them, the product provides two main features:
- Sentence composition: enter a few keywords, and LPA organizes them into a coherent sentence for reference.
- Continuation: enter a sentence, and LPA suggests how to continue, recommending word choice, sentence patterns, and tone through the generated text.

LPA's core goal is to give authors paper-style suggestions and AI-generated example sentences for reference; authors can revise the generated examples into sentences of their own.
LPA currently supports English paper writing assistance in the natural language processing field only. It provides two assistance workflows:
Sentence composition:
Step 1: Click the Suggest (建议) button and enter the ideas you already have in mind, in the form of keywords.
Step 2: Click the Recommend button; LPA organizes the keywords into suggested text in the style of top-conference papers, helping you quickly turn existing ideas into more polished prose.
Step 3: Choose to use the generated content or regenerate it, then edit it as needed.
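LPA itself is used through the web editor described above, but as a rough mental model the keyword-to-sentence step can be pictured as prompting a general text-generation model. The snippet below is only a minimal sketch of that idea, not LPA's actual implementation: it assumes the Hugging Face transformers library and the public google/flan-t5-base checkpoint, and the prompt wording and sampling settings are illustrative choices.

```python
# Minimal sketch of the keyword-to-sentence idea (NOT LPA's actual implementation).
# Assumes the Hugging Face `transformers` library and the public google/flan-t5-base
# checkpoint; the prompt wording and sampling settings are illustrative choices.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "google/flan-t5-base"  # assumption: any instruction-tuned seq2seq model works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def compose_sentence(keywords: list[str]) -> str:
    """Turn a list of keywords into one academic-style sentence."""
    prompt = (
        "Write one formal sentence for a research paper that uses these keywords: "
        + ", ".join(keywords)
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,   # keep the suggestion to roughly one sentence
        do_sample=True,      # sample so repeated requests give different suggestions
        top_p=0.9,
        temperature=0.8,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(compose_sentence(["Transformer", "reinforcement learning", "human feedback"]))
```

Sampling rather than greedy decoding mirrors the regenerate option in Step 3: asking again can yield a different suggestion for the same keywords.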
Continuation:
Step 1: Write directly in the editor, or copy and paste the beginning of your draft into the editing page, then click the Continue button or use the keyboard shortcut.
Step 2: LPA suggests a continuation in the writing style of top NLP conference papers; choose whether to use the passage or request another recommendation, then edit it before use.
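Similarly, the continuation workflow can be pictured as prefix completion with a causal language model. The sketch below is illustrative only; it assumes the Hugging Face transformers library and uses the public gpt2 checkpoint as a stand-in for a model styled on conference-paper text.

```python
# Minimal sketch of the continuation idea (NOT LPA's actual implementation).
# Assumes the Hugging Face `transformers` library; gpt2 is a public stand-in model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: stand-in for a paper-style language model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def continue_text(prefix: str, max_new_tokens: int = 80) -> str:
    """Suggest a continuation for the given paper prefix."""
    input_ids = tokenizer(prefix, return_tensors="pt").input_ids
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,          # sampling lets the user ask for another recommendation
        top_p=0.92,
        pad_token_id=tokenizer.eos_token_id,  # gpt2 defines no pad token by default
    )
    new_ids = output_ids[0][input_ids.shape[1]:]  # keep only the newly generated tokens
    return tokenizer.decode(new_ids, skip_special_tokens=True)

prefix = ("Making language models bigger does not inherently make them better "
          "at following a user's intent.")
print(continue_text(prefix))
```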
The examples below show LPA's output in several AI fields, covering both continuation and sentence composition.
Input context | Langboat LPA output |
---|---|
Making language models bigger does not inherently make them better at following a user’s intent. For example, large language models can generate outputs that are untruthful, | inconsistent, and often lack a coherent personality. A growing body of work has shown that these weaknesses can be remedied by using human examples as prompts to guide the generation. One could use this line of work to guide the rest of our models towards more understandable texts, and perhaps better coherence. However, we can be somewhat assured that our language models are following the desired intents to the best of our abilities. In this paper, we explore two mechanisms for using human-written examples to guide generation: using a large language model as a decoder, and adding a classifier to the language models. We conduct a thorough analysis of these methods on the tasks of open domain question answering and dialog modeling, comparing with standard fine-tuning, as well as other baselines. In both cases, our methods show consistent improvements over strong fine tuning baselines, and outperforms existing state of the art approaches. |
Input context | Langboat LPA output |
---|---|
Image synthesis is one of the computer vision fields with the most spectacular recent development, but also among those with the greatest computational demands. | The recent generative adversarial network models allow for high-quality image synthesis, but their training is often time consuming and prone to failure. One of the main challenges is how to use multiple generators in a way that maintains consistency between generators, while maintaining image quality. In this work, we tackle the consistency problem of GANs by introducing a novel differentiable image synthesis method and its application in 3D-aware image generation. The novel differentiability comes from a new formulation called gradient-matching that naturally extends the gradient optimization of GANs to the case when the image is a 3D array. |
Input context | Langboat LPA output |
---|---|
These pre-trained audio encoders learn high-quality representations of speech, but because they are purely unsupervised they lack an equivalently performant decoder mapping those representations to usable outputs. | We find that this limitation can be removed with a small amount of additional task-specific supervised fine-tuning. We observe consistent and significant improvements across a wide array of tasks, including the recent challenging LRS3 speech translation benchmark, and our fine-tuned models are able to outperform comparable prior work. Furthermore, the benefits of this approach do not require as many models or compute for training. In summary: • Audio pre-training learns general purpose speech representations useful for downstream tasks. • Audio pre-trained models are able to take advantage of task-specific training data e.g., surpassing prior unsupervised methods on a wide range of tasks, including the challenging LRS3 benchmark. |
Input context | Langboat LPA output |
---|---|
In this work, we propose a reinforcement learning based model for networked multi-agent systems with dynamic networked interactions. | We use a deep neural network to represent the agent policies. The agent’s behaviors are the networked sequential interactions and we model this dynamic networked interaction as the reward. This dynamic reward model can be learned via backpropagation, without any prior knowledge on the networked system. The agent behaviors are then mapped to trajectories by the learned model. We use an actor-critic based deep reinforcement learning algorithm that can optimize trajectories towards the maximum reward. Finally, with the learned model we can simulate any agent behaviors for planning and learning in a real networked system. We use this model, along with the simulator to perform policy optimization and trajectory planning for a networked multi-agent system. Our results show that our approach can improve the overall outcomes, compared to both heuristic policy search algorithms and other popular planning and optimization heuristics such as A* or MM* for different classes of networks. |
NLP field | |
---|---|
Input context | Making language models bigger does not inherently make them better at following a user’s intent. In this work, we proposed ChatGPT, |
Input keywords | Transformer, reinforcement learning, human feedback |
Langboat LPA output | a new Transformer LM for dialogue, trained using reinforcement learning from human feedback. We demonstrate via human and automatic metrics that ChatGPT outperforms a strong conversational GPT2 baseline. The key factor behind this is the reward function which helps the model to learn better dialogues without relying on task-specific labels or train-inference mismatch. We show that the reward from even a small number of feedback iterations can be used in a semi-supervised framework to scale a dialogue model for production. |
Business Cooperation: bd@langboat.com
Address: Floor 16, Fangzheng International Building, No. 52 Beisihuan West Road, Haidian District, Beijing, China.