SenticGCNTrainArgs

class SenticGCNTrainArgs(senticnet_word_file_path: str = './senticNet/senticnet_word.txt', save_preprocessed_senticnet: bool = True, saved_preprocessed_senticnet_file_path: str = 'senticnet/senticnet.pickle', spacy_pipeline: str = 'en_core_web_sm', word_vec_file_path: str = 'glove/glove.840B.300d.txt', dataset_train: list = <factory>, dataset_test: list = <factory>, valset_ratio: float = 0.0, model: str = 'senticgcn', save_best_model: bool = True, save_model_path: str = 'senticgcn', tokenizer: str = 'senticgcn_tokenizer', train_tokenizer: bool = False, save_tokenizer: bool = False, save_tokenizer_path: str = 'senticgcn_tokenizer', embedding_model: str = 'senticgcn_embed_model', build_embedding_model: bool = False, save_embedding_model: bool = False, save_embedding_model_path: str = 'senticgcn_embed_model', save_results: bool = True, save_results_folder: str = 'results', initializer: str = 'xavier_uniform_', optimizer: str = 'adam', loss_function: str = 'cross_entropy', learning_rate: float = 0.001, l2reg: float = 1e-05, epochs: int = 100, batch_size: int = 16, log_step: int = 5, embed_dim: int = 300, hidden_dim: int = 300, dropout: float = 0.3, polarities_dim: int = 3, seed: int = 776, device: str = 'cuda', repeats: int = 10, patience: int = 5, max_len: int = 85, eval_args: Dict[str, Any] = <factory>)

Data class holding the training configuration for both SenticGCNModel and SenticGCNBertModel.
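The `<factory>` placeholders in the signature above indicate mutable defaults produced by `dataclasses.field(default_factory=...)`, since Python data classes cannot use a plain `list` or `dict` literal as a default. The following is a minimal, self-contained sketch of that pattern using a hypothetical `TrainArgsSketch` class with a small subset of the fields above; it is not the actual sgnlp implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class TrainArgsSketch:
    # Immutable scalar fields can take plain defaults.
    learning_rate: float = 0.001
    batch_size: int = 16
    device: str = "cuda"
    # Mutable defaults (rendered as `<factory>` in the docs signature)
    # must be created per-instance via default_factory.
    dataset_train: List[str] = field(default_factory=list)
    eval_args: Dict[str, Any] = field(default_factory=dict)

# Override only the fields you need; the rest keep their defaults.
args = TrainArgsSketch(batch_size=32)
```

Each instance gets its own fresh `dataset_train` list and `eval_args` dict, so mutating one instance's config never leaks into another.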