We host the application cpt so that it can be run in our online workstations, either with Wine or directly.
Quick description of cpt:
A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation. Aiming to unify both NLU and NLG tasks, we propose a novel Chinese Pre-trained Unbalanced Transformer (CPT).
Vocabulary: We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add 6,800+ missing Chinese characters (most of them traditional Chinese characters); 2) remove redundant tokens (e.g., Chinese character tokens with a ## prefix); and 3) add some English tokens to reduce OOV.
Position embeddings: We extend max_position_embeddings from 512 to 1024.
We initialize the new models from the old checkpoints with vocabulary alignment: token embeddings found in the old checkpoints are copied, and the newly added parameters are randomly initialized. We then further train the new CPT and Chinese BART for 50K steps with batch size 2048, max sequence length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
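Loading the updated checkpoints looks roughly like the sketch below, following the upstream CPT project's documented pattern (a local modeling_cpt.py file plus BertTokenizer). The Hub model id fnlp/cpt-base and the printed values are assumptions tied to that upstream release; adjust them if you host the weights elsewhere.

```python
# Minimal sketch, assuming the upstream "fnlp/cpt-base" checkpoint on the
# Hugging Face Hub and the modeling_cpt.py file shipped with this repository.
from transformers import BertTokenizer
from modeling_cpt import CPTForConditionalGeneration  # local file from this repo

tokenizer = BertTokenizer.from_pretrained("fnlp/cpt-base")
model = CPTForConditionalGeneration.from_pretrained("fnlp/cpt-base")

print(len(tokenizer))                        # expected 51271: the enlarged vocabulary
print(model.config.max_position_embeddings)  # expected 1024: extended from 512

# Masked-span generation through the generation branch.
input_ids = tokenizer.encode("北京是[MASK]的首都", return_tensors="pt")
pred_ids = model.generate(input_ids, num_beams=4, max_length=20)
print(tokenizer.convert_ids_to_tokens(pred_ids[0]))
```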
Features:
- This repository contains code and checkpoints for CPT
- A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation
- Shared Encoder (S-Enc): a Transformer encoder with fully-connected self-attention, designed to capture the common semantic representation for both language understanding and generation
- Understanding Decoder (U-Dec): a shallow Transformer encoder with fully-connected self-attention, designed for NLU tasks; the input of U-Dec is the output of S-Enc
- Generation Decoder (G-Dec): a Transformer decoder with masked self-attention, designed for generation tasks; G-Dec utilizes the output of S-Enc with cross-attention (see the sketch after this list)
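To make the "unbalanced" split concrete, here is a self-contained PyTorch sketch of the layout described above: one deep shared encoder, a shallow encoder-style understanding branch, and a shallow autoregressive generation branch. This is an illustration, not the repository's implementation; all layer counts, dimensions, and names here are placeholders (position embeddings are omitted for brevity).

```python
# Illustrative sketch of the unbalanced layout: S-Enc (deep, shared),
# U-Dec (shallow encoder-style stack for NLU), G-Dec (shallow
# autoregressive decoder with cross-attention over S-Enc output).
import torch
import torch.nn as nn

class UnbalancedTransformerSketch(nn.Module):
    def __init__(self, vocab_size=51271, d_model=768, n_heads=12,
                 n_shared=10, n_understand=2, n_generate=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # S-Enc: deep shared encoder with fully-connected self-attention.
        self.s_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_shared)
        # U-Dec: shallow encoder-style stack; its input is the S-Enc output.
        self.u_dec = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_understand)
        # G-Dec: shallow decoder with masked self-attention and
        # cross-attention over the S-Enc output.
        self.g_dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_generate)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory = self.s_enc(self.embed(src_ids))       # shared representation
        nlu_states = self.u_dec(memory)                # understanding branch
        tgt_len = tgt_ids.size(1)
        causal = torch.triu(                           # mask future positions
            torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
        nlg_states = self.g_dec(self.embed(tgt_ids), memory, tgt_mask=causal)
        return nlu_states, self.lm_head(nlg_states)    # generation branch

model = UnbalancedTransformerSketch()
src = torch.randint(0, 51271, (1, 16))
tgt = torch.randint(0, 51271, (1, 8))
nlu_states, nlg_logits = model(src, tgt)
print(nlu_states.shape, nlg_logits.shape)  # (1, 16, 768) (1, 8, 51271)
```

The point of the asymmetry is that the deep shared encoder carries most of the computation for both task types, while the shallow task-specific branches keep the total cost close to an encoder-only model.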
Programming Language: Python.