feat: V1.0.0

2025-10-21
- add the shell history feature
- add the logging feature
- refactor the codebase for better maintainability
DongShengWu 2025-10-21 19:36:27 +08:00
parent a63c1d0e25
commit 37a5d8d804
16 changed files with 511 additions and 277 deletions

CHANGELOG.md (new file)

@ -0,0 +1,5 @@
# v1.0.0
2025-10-21
- add the shell history feature
- add the logging feature
- refactor the codebase for better maintainability

CLAUDE.md

@ -4,96 +4,176 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
AutoTerminal is an LLM-powered terminal assistant that converts natural language into shell commands. It's a Python CLI tool that uses OpenAI-compatible APIs to generate and execute terminal commands based on user input, with context awareness from command history and current directory contents.
## Development Commands
### Package Management
```bash
# Install dependencies (development mode)
uv sync
# Install package locally for testing
pip install --user -e .
# Uninstall
pip uninstall autoterminal
```
### Running the Tool
```bash
# Using uv run (development)
uv run python autoterminal/main.py "your command request"
# After installation
at "your command request"
# With history context
at --history-count 5 "command based on previous context"
# Command recommendation mode (no input)
at
```
### Building and Distribution
```bash
# Build distribution packages
python -m build
# Upload to PyPI (requires twine)
twine upload dist/*
```
## Architecture
### Core Flow
1. **User Input** → CLI argument parsing (`main.py`)
2. **Configuration Loading** → ConfigLoader/ConfigManager retrieve user settings from `~/.autoterminal/config.json`
3. **Context Gathering** → HistoryManager fetches recent commands + glob current directory contents
4. **LLM Generation** → LLMClient sends prompt with context to OpenAI-compatible API
5. **Command Execution** → User confirms, then command executes via `os.system()`
6. **History Persistence** → Executed command saved to `~/.autoterminal/history.json`
### Key Components
**autoterminal/main.py**
- Entry point for `at` command
- Argument parsing and orchestration
- Two modes: command generation (with user input) and recommendation mode (without input)
- Uses `glob.glob("*")` to gather current directory context
**autoterminal/llm/client.py**
- `LLMClient` class wraps OpenAI API
- `generate_command()` constructs prompts with history and directory context
- Two prompt modes: default (user command) and recommendation (auto-suggest)
- System prompt includes recent command history and current directory files
**autoterminal/config/**
- `ConfigLoader`: Reads from `~/.autoterminal/config.json`
- `ConfigManager`: Interactive setup wizard, validation, and persistence
- Required fields: `api_key`, `base_url`, `model`
- Optional: `max_history`, `default_prompt`, `recommendation_prompt`
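Putting these fields together, a minimal `~/.autoterminal/config.json` might look like this (all values are illustrative placeholders, not real credentials):

```json
{
  "api_key": "sk-...",
  "base_url": "https://api.openai.com/v1",
  "model": "your-model-name",
  "max_history": 10
}
```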
**autoterminal/history/**
- `HistoryManager`: Persists commands to `~/.autoterminal/history.json`
- Stores: `timestamp`, `user_input`, `generated_command`, `executed` flag
- `get_last_executed_command()` prevents duplicate recommendations
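Given the stored fields, a `history.json` entry would look roughly like this (values illustrative):

```json
[
  {
    "timestamp": "2025-10-21T19:36:27",
    "user_input": "list all files",
    "generated_command": "ls -la",
    "executed": true
  }
]
```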
**autoterminal/utils/helpers.py**
- `clean_command()`: Strips quotes and whitespace from LLM output
- `get_shell_history(count)`: Reads shell history from `~/.bash_history` or `~/.zsh_history`
- Filters sensitive commands (password, key, token, etc.)
- Handles zsh extended history format
- Deduplicates commands (keeps last occurrence)
- Returns up to N most recent commands
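The deduplication described above (keep only the last occurrence of each command, preserving order) can be sketched as:

```python
def dedup_keep_last(commands):
    """Remove duplicates, keeping only the last occurrence of each command."""
    result = []
    for cmd in commands:
        if cmd in result:
            result.remove(cmd)  # drop the earlier occurrence
        result.append(cmd)
    return result

print(dedup_keep_last(["ls", "git status", "ls", "pwd"]))
# → ['git status', 'ls', 'pwd']
```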
**autoterminal/utils/logger.py**
- Configures `loguru` for structured logging
- Console output to stderr (colored, configurable via `AUTOTERMINAL_LOG_LEVEL` env var)
- File output to `~/.autoterminal/autoterminal.log` (rotation: 10MB, retention: 7 days)
- Logging levels: ERROR for console (default, production mode), DEBUG for file
- Enable verbose console logging: `AUTOTERMINAL_LOG_LEVEL=INFO` or `DEBUG`
- Disable file logging: `AUTOTERMINAL_FILE_LOG=false`
### Configuration Storage
All user data lives in `~/.autoterminal/`:
- `config.json`: API credentials and settings
- `history.json`: Command execution history
- `autoterminal.log`: Application logs (rotated at 10MB, compressed)
### Installation Mechanism
`pyproject.toml` defines:
- Entry point: `at = "autoterminal.main:main"`
- Dependencies: `openai>=1.0.0`, `loguru>=0.7.0`
- Python requirement: `>=3.10`
- Build system: setuptools
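The `pyproject.toml` pieces listed above correspond to entries like these (excerpted, not the full file):

```toml
[project]
requires-python = ">=3.10"
dependencies = ["openai>=1.0.0", "loguru>=0.7.0"]

[project.scripts]
at = "autoterminal.main:main"
```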
## Important Implementation Details
### Context-Aware Command Generation
The LLM receives four critical context inputs:
1. **Command History**: Last N commands from AutoTerminal (user input + generated command pairs)
2. **Directory Contents**: Output of `glob.glob("*")` in current working directory
3. **Shell History**: Last 20 commands from user's shell history (bash/zsh) via `get_shell_history()`
4. **Last Executed Command**: Used in recommendation mode to avoid repeats
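How these context inputs get appended to the system prompt can be sketched as follows (function and variable names here are illustrative, not the actual implementation in `llm/client.py`):

```python
def build_system_prompt(base_prompt, history, dir_content, shell_history):
    """Append the gathered context blocks to the base system prompt."""
    prompt = base_prompt
    if history:
        pairs = "\n".join(
            f"{i}. input: {e['user_input']} -> command: {e['generated_command']}"
            for i, e in enumerate(reversed(history), 1)
        )
        prompt += "\nRecent AutoTerminal history:\n" + pairs
    if dir_content:
        prompt += "\nCurrent directory contents:\n" + "\n".join(dir_content)
    if shell_history:
        prompt += "\nRecent shell commands:\n" + "\n".join(
            f"{i}. {cmd}" for i, cmd in enumerate(shell_history, 1)
        )
    return prompt

p = build_system_prompt(
    "You are a terminal assistant.",
    [{"user_input": "show files", "generated_command": "ls"}],
    ["main.py", "README.md"],
    ["git status"],
)
```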
### Two Operating Modes
1. **Command Generation Mode** (`at "do something"`):
- Uses `default_prompt` from config
- Generates command from user's natural language request
2. **Recommendation Mode** (`at` with no args):
- Uses `recommendation_prompt` from config
- Analyzes context to suggest next likely command
- Returns empty string if context insufficient
- Special logic to avoid recommending `echo` commands for listing files
### Safety Mechanism
Commands are always displayed before execution with "Press Enter to execute..." prompt. User must explicitly confirm (Enter key) before execution via `os.system()`.
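The confirm-then-execute flow amounts to the following sketch (the `confirm` parameter is an illustrative injection point for testing; the real code calls `input()` directly):

```python
import os

def execute_with_confirmation(command, confirm=input):
    """Show the command, wait for Enter, then run it via os.system."""
    print(f"$ {command}")
    print("Press Enter to execute...")
    try:
        confirm()  # blocks until the user presses Enter
    except EOFError:
        return None  # input cancelled, nothing executed
    return os.system(command)

# Non-interactive usage with an injected no-op confirmation:
status = execute_with_confirmation("exit 0", confirm=lambda: None)
```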
### Command Cleaning
LLM output is processed through `clean_command()` to remove:
- Leading/trailing quotes (single or double)
- Excess whitespace
This prevents common LLM wrapping artifacts.
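A sketch matching the cleaning behavior described (the real implementation lives in `autoterminal/utils/helpers.py`):

```python
def clean_command(command: str) -> str:
    """Strip surrounding whitespace and one layer of matching quotes."""
    command = command.strip()
    if command.startswith('"') and command.endswith('"'):
        command = command[1:-1]
    if command.startswith("'") and command.endswith("'"):
        command = command[1:-1]
    return command.strip()

print(clean_command('  "ls -la"  '))  # → ls -la
```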
## Development Notes
### When modifying prompts:
- System prompts are in `config/manager.py` defaults and configurable via `config.json`
- Recommendation prompt explicitly instructs against `echo` for file listing
- Context is appended to system prompt, not injected into user message
### When working with history:
- History is capped at `max_history` entries (default: 10)
- Stored in chronological order (oldest first); new entries are appended
- `get_recent_history()` returns oldest-to-newest slice for context
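The capping and retrieval described above amount to simple list slicing; a sketch (not the exact `HistoryManager` class):

```python
class HistorySketch:
    def __init__(self, max_history=10):
        self.max_history = max_history
        self.history = []

    def add(self, entry):
        self.history.append(entry)
        # Cap the list at max_history, dropping the oldest entries.
        if len(self.history) > self.max_history:
            self.history = self.history[-self.max_history:]

    def get_recent_history(self, count=None):
        # Oldest-to-newest slice of the most recent `count` entries.
        count = count or self.max_history
        return self.history[-count:] if self.history else []

h = HistorySketch(max_history=3)
for cmd in ["a", "b", "c", "d"]:
    h.add(cmd)
```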
### When extending LLM support:
- Client uses `openai` package with custom `base_url`
- Compatible with any OpenAI API-compatible service
- Temperature fixed at 0.1, max_tokens at 100 for deterministic short outputs
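The request boils down to the fixed sampling parameters plus the composed messages; a sketch of the parameter construction (this dict is what would be passed to `client.chat.completions.create`):

```python
def build_request(model, system_prompt, user_content):
    """Assemble the chat-completion parameters used for command generation."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
        "temperature": 0.1,  # low temperature for deterministic output
        "max_tokens": 100,   # generated commands are short
    }

req = build_request("your-model", "You are a terminal assistant.", "list files")
```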
### Configuration initialization:
- First run triggers interactive setup wizard
- Config validation checks for all required keys
- Command-line args (`--api-key`, `--base-url`, `--model`) override config file
### Logging and Debugging:
- All modules use centralized `loguru` logger from `autoterminal.utils.logger`
- **Production mode**: Console only shows ERROR messages (default)
- **Debug mode**: `AUTOTERMINAL_LOG_LEVEL=DEBUG at "command"` shows all logs
- File logs: Always DEBUG level at `~/.autoterminal/autoterminal.log` (unless disabled)
- View logs: `tail -f ~/.autoterminal/autoterminal.log`
- Key events logged: config loading, LLM calls, command execution, history updates, shell history reading
### Shell History Integration:
- `get_shell_history()` automatically detects bash/zsh history files
- Detection strategy:
1. Tries `$HISTFILE` environment variable
2. Detects `$SHELL` to determine shell type (zsh/bash)
3. Prioritizes history files based on detected shell
4. Falls back to common locations (`~/.bash_history`, `~/.zsh_history`, etc.)
- Sensitive keyword filtering prevents leaking credentials
- Shell history provides additional context beyond AutoTerminal's own history
- Failure to read shell history is non-fatal (returns empty list with warning)
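Handling of the zsh extended history format mentioned above can be sketched like this (the real logic is inside `get_shell_history()`):

```python
def parse_history_line(line: str) -> str:
    """Extract the command from a zsh extended-history line.

    zsh extended format: ': <timestamp>:<duration>;<command>'
    Plain bash history lines are returned unchanged.
    """
    line = line.strip()
    if line.startswith(":"):
        parts = line.split(";", 1)
        if len(parts) > 1:
            return parts[1].strip()
    return line

print(parse_history_line(": 1729500000:0;git status"))  # → git status
print(parse_history_line("ls -la"))                     # → ls -la
```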

LICENSE (new file)

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) <year> Adam Veldhousen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

autoterminal/__init__.py

@ -2,4 +2,4 @@ from . import config, llm, utils
from .history import HistoryManager
from .main import main
__all__ = ['config', 'llm', 'utils', 'HistoryManager', 'main']

autoterminal/config/loader.py

@ -1,10 +1,12 @@
import os
import json
from typing import Dict, Optional, Any
from autoterminal.utils.logger import logger
class ConfigLoader:
"""配置加载器,支持从文件加载配置"""
def __init__(self, config_file: str = None):
if config_file is None:
# 从用户主目录下的.autoterminal目录中加载配置文件
@ -13,17 +15,22 @@ class ConfigLoader:
self.config_file = os.path.join(config_dir, "config.json")
else:
self.config_file = config_file
def load_from_file(self) -> Dict:
"""从配置文件加载配置"""
if os.path.exists(self.config_file):
try:
logger.debug(f"从文件加载配置: {self.config_file}")
with open(self.config_file, 'r', encoding='utf-8') as f:
config = json.load(f)
logger.info("配置文件加载成功")
return config
except Exception as e:
logger.error(f"无法读取配置文件 {self.config_file}: {e}")
else:
logger.debug(f"配置文件不存在: {self.config_file}")
return {}
def get_config(self) -> Dict:
"""获取配置"""
return self.load_from_file()

autoterminal/config/manager.py

@ -1,10 +1,12 @@
import os
import json
from typing import Dict, Any
from autoterminal.utils.logger import logger
class ConfigManager:
"""配置管理器,支持配置的保存和验证"""
def __init__(self, config_file: str = None):
if config_file is None:
# 将配置文件存储在用户主目录下的.autoterminal目录中
@ -15,7 +17,7 @@ class ConfigManager:
self.config_file = os.path.join(config_dir, "config.json")
else:
self.config_file = config_file
self.required_keys = ['api_key', 'base_url', 'model']
self.default_config = {
'base_url': 'https://api.openai.com/v1',
@ -23,34 +25,40 @@ class ConfigManager:
'default_prompt': '你现在是一个终端助手,用户输入想要生成的命令,你来输出一个命令,不要任何多余的文本!',
'max_history': 10
}
def save_config(self, config: Dict[str, Any]) -> bool:
"""保存配置到文件"""
try:
# 确保目录存在
os.makedirs(
os.path.dirname(
self.config_file) if os.path.dirname(
self.config_file) else '.',
exist_ok=True)
logger.debug(f"保存配置到文件: {self.config_file}")
with open(self.config_file, 'w', encoding='utf-8') as f:
json.dump(config, f, indent=2, ensure_ascii=False)
logger.info("配置文件保存成功")
return True
except Exception as e:
logger.error(f"无法保存配置文件 {self.config_file}: {e}")
return False
def validate_config(self, config: Dict[str, Any]) -> bool:
"""验证配置是否完整"""
for key in self.required_keys:
if not config.get(key):
return False
return True
def initialize_config(self) -> Dict[str, Any]:
"""初始化配置向导"""
print("欢迎使用AutoTerminal配置向导")
print("请提供以下信息以完成配置:")
config = self.default_config.copy()
# 获取API密钥
try:
api_key = input("请输入您的API密钥: ").strip()
@ -64,10 +72,11 @@ class ConfigManager:
except Exception as e:
print(f"错误: 无法读取API密钥输入: {e}")
return {}
# 获取Base URL
try:
base_url = input(
f"请输入Base URL (默认: {self.default_config['base_url']}): ").strip()
if base_url:
config['base_url'] = base_url
except EOFError:
@ -75,7 +84,7 @@ class ConfigManager:
return {}
except Exception as e:
print(f"警告: 无法读取Base URL输入: {e}")
# 获取模型名称
try:
model = input(f"请输入模型名称 (默认: {self.default_config['model']}): ").strip()
@ -86,7 +95,7 @@ class ConfigManager:
return {}
except Exception as e:
print(f"警告: 无法读取模型名称输入: {e}")
# 保存配置
if self.save_config(config):
print(f"配置已保存到 {self.config_file}")
@ -94,7 +103,7 @@ class ConfigManager:
else:
print("配置保存失败")
return {}
def get_or_create_config(self) -> Dict[str, Any]:
"""获取现有配置或创建新配置"""
# 尝试从文件加载配置
@ -109,7 +118,7 @@ class ConfigManager:
else:
print("现有配置不完整")
except Exception as e:
logger.warning(f"无法读取配置文件 {self.config_file}: {e}")
# 如果配置不存在或不完整,启动初始化向导
return self.initialize_config()

autoterminal/history.py (deleted)

@ -1,73 +0,0 @@
import os
import json
from typing import List, Dict, Any
from datetime import datetime
class HistoryManager:
"""历史命令管理器,用于记录和检索命令历史"""
def __init__(self, history_file: str = None, max_history: int = 10):
if history_file is None:
# 将历史文件存储在用户主目录下的.autoterminal目录中
home_dir = os.path.expanduser("~")
config_dir = os.path.join(home_dir, ".autoterminal")
self.history_file = os.path.join(config_dir, "history.json")
else:
self.history_file = history_file
self.max_history = max_history
self.history = self.load_history()
def load_history(self) -> List[Dict[str, Any]]:
"""从历史文件加载命令历史"""
if os.path.exists(self.history_file):
try:
with open(self.history_file, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
print(f"警告: 无法读取历史文件 {self.history_file}: {e}")
return []
def save_history(self) -> bool:
"""保存命令历史到文件"""
try:
# 确保目录存在
os.makedirs(os.path.dirname(self.history_file) if os.path.dirname(self.history_file) else '.', exist_ok=True)
with open(self.history_file, 'w', encoding='utf-8') as f:
json.dump(self.history, f, indent=2, ensure_ascii=False)
return True
except Exception as e:
print(f"错误: 无法保存历史文件 {self.history_file}: {e}")
return False
def add_command(self, user_input: str, generated_command: str, executed: bool = True) -> None:
"""添加命令到历史记录"""
entry = {
"timestamp": datetime.now().isoformat(),
"user_input": user_input,
"generated_command": generated_command,
"executed": executed
}
self.history.append(entry)
# 保持历史记录在最大数量限制内
if len(self.history) > self.max_history:
self.history = self.history[-self.max_history:]
# 保存到文件
self.save_history()
def get_recent_history(self, count: int = None) -> List[Dict[str, Any]]:
"""获取最近的命令历史"""
if count is None:
count = self.max_history
return self.history[-count:] if self.history else []
def get_last_command(self) -> Dict[str, Any]:
"""获取最后一条命令"""
if self.history:
return self.history[-1]
return {}

autoterminal/history/__init__.py

@ -1,4 +1,4 @@
# History module initialization
from .history import HistoryManager
__all__ = ['HistoryManager']

autoterminal/history/history.py

@ -2,10 +2,12 @@ import os
import json
from typing import List, Dict, Any
from datetime import datetime
from autoterminal.utils.logger import logger
class HistoryManager:
"""历史命令管理器,用于记录和检索命令历史"""
def __init__(self, history_file: str = None, max_history: int = 10):
if history_file is None:
# 将历史文件存储在用户主目录下的.autoterminal目录中
@ -14,67 +16,84 @@ class HistoryManager:
self.history_file = os.path.join(config_dir, "history.json")
else:
self.history_file = history_file
self.max_history = max_history
self.history = self.load_history()
def load_history(self) -> List[Dict[str, Any]]:
"""从历史文件加载命令历史"""
if os.path.exists(self.history_file):
try:
logger.debug(f"从文件加载历史: {self.history_file}")
with open(self.history_file, 'r', encoding='utf-8') as f:
history = json.load(f)
logger.info(f"加载了 {len(history)} 条历史记录")
return history
except Exception as e:
logger.error(f"无法读取历史文件 {self.history_file}: {e}")
else:
logger.debug(f"历史文件不存在: {self.history_file}")
return []
def save_history(self) -> bool:
"""保存命令历史到文件"""
try:
# 确保目录存在
os.makedirs(
os.path.dirname(
self.history_file) if os.path.dirname(
self.history_file) else '.',
exist_ok=True)
logger.debug(f"保存历史到文件: {self.history_file}")
with open(self.history_file, 'w', encoding='utf-8') as f:
json.dump(self.history, f, indent=2, ensure_ascii=False)
logger.debug("历史文件保存成功")
return True
except Exception as e:
logger.error(f"无法保存历史文件 {self.history_file}: {e}")
return False
def add_command(
self,
user_input: str,
generated_command: str,
executed: bool = True) -> None:
"""添加命令到历史记录"""
logger.debug(f"添加命令到历史: {generated_command}")
entry = {
"timestamp": datetime.now().isoformat(),
"user_input": user_input,
"generated_command": generated_command,
"executed": executed
}
self.history.append(entry)
# 保持历史记录在最大数量限制内
if len(self.history) > self.max_history:
self.history = self.history[-self.max_history:]
logger.debug(f"历史记录已截断到 {self.max_history}")
# 保存到文件
self.save_history()
def get_last_executed_command(self) -> str:
"""获取最后一条已执行的命令"""
for entry in reversed(self.history):
if entry.get("executed", False):
return entry.get("generated_command", "")
return ""
def get_recent_history(self, count: int = None) -> List[Dict[str, Any]]:
"""获取最近的命令历史"""
if count is None:
count = self.max_history
return self.history[-count:] if self.history else []
def get_last_command(self) -> Dict[str, Any]:
"""获取最后一条命令"""
if self.history:
return self.history[-1]
return {}

autoterminal/llm/client.py

@ -1,65 +1,86 @@
from openai import OpenAI
from typing import Dict, Any, Optional, List
import os
from autoterminal.utils.logger import logger
class LLMClient:
"""LLM客户端封装OpenAI API调用"""
def __init__(self, config: Dict[str, Any]):
self.config = config
logger.info("初始化 LLM 客户端")
logger.debug(f"使用模型: {config.get('model')}, Base URL: {config.get('base_url')}")
self.client = OpenAI(
api_key=config.get('api_key'),
base_url=config.get('base_url')
)
def generate_command(self, user_input: str, prompt: Optional[str] = None,
history: Optional[List[Dict[str, Any]]] = None,
current_dir_content: Optional[List[str]] = None,
shell_history: Optional[List[str]] = None,
last_executed_command: str = "") -> str:
"""根据用户输入生成命令"""
# 根据用户输入是否为空选择不同的提示词
if not user_input:
if not prompt:
prompt = self.config.get(
'recommendation_prompt',
'你现在是一个终端助手,根据上下文自动推荐命令:当用户没有输入时,基于最近执行的命令历史和当前目录内容,智能推荐最可能需要的终端命令(仅当有明确上下文线索时);当用户输入命令需求时,生成对应命令。仅输出纯命令文本,不要任何解释或多余内容!')
else:
if not prompt:
prompt = self.config.get(
'default_prompt',
'你现在是一个终端助手,用户输入想要生成的命令,你来输出一个命令,不要任何多余的文本!')
# 构建系统提示,包含上下文信息
system_prompt = prompt
# 添加历史命令上下文
if history:
history_context = "\n最近执行的命令历史:\n"
for i, entry in enumerate(reversed(history), 1):
history_context += f"{i}. 用户输入: {entry.get('user_input', '')} -> 生成命令: {entry.get('generated_command', '')}\n"
system_prompt += history_context
# 添加当前目录内容上下文
if current_dir_content:
dir_context = "\n当前目录下的文件和文件夹:\n" + "\n".join(current_dir_content)
system_prompt += dir_context
# 添加系统 Shell 历史上下文
if shell_history:
shell_context = "\n系统Shell最近执行的命令:\n"
for i, cmd in enumerate(shell_history, 1):
shell_context += f"{i}. {cmd}\n"
system_prompt += shell_context
# 当用户输入为空时,使用特殊的提示来触发推荐模式
if not user_input:
user_content = f"根据提供的上下文信息推荐一个最可能需要的终端命令仅当有明确的上下文线索时。如果上下文信息不足以确定一个有用的命令则返回空。请直接返回一个可执行的终端命令不要包含任何解释或其他文本。例如ls -la 或 git status。特别注意不要使用echo命令来列出文件应该使用ls命令。推荐命令时请考虑最近执行的命令历史避免重复推荐相同的命令。最后执行的命令是: {last_executed_command}。如果当前目录有pyproject.toml或setup.py文件可以考虑使用pip list查看已安装的包。"
else:
user_content = user_input
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_content}
]
try:
logger.info(f"调用 LLM 生成命令,用户输入: '{user_input if user_input else '(推荐模式)'}'")
logger.debug(f"系统提示长度: {len(system_prompt)} 字符")
response = self.client.chat.completions.create(
model=self.config.get('model'),
messages=messages,
temperature=0.1,
max_tokens=100
)
command = response.choices[0].message.content.strip()
logger.info(f"LLM 返回命令: '{command}'")
return command
except Exception as e:
logger.error(f"LLM调用失败: {str(e)}")
raise Exception(f"LLM调用失败: {str(e)}")

autoterminal/main.py

@ -9,11 +9,15 @@ import glob
from autoterminal.config.loader import ConfigLoader
from autoterminal.config.manager import ConfigManager
from autoterminal.llm.client import LLMClient
from autoterminal.utils.helpers import clean_command, get_shell_history
from autoterminal.history import HistoryManager
from autoterminal.utils.logger import logger
def main():
"""主程序入口"""
logger.info("AutoTerminal 启动")
# 解析命令行参数
parser = argparse.ArgumentParser(description='AutoTerminal - 智能终端工具')
parser.add_argument('user_input', nargs='*', help='用户输入的自然语言命令')
@ -21,16 +25,18 @@ def main():
parser.add_argument('--base-url', help='Base URL')
parser.add_argument('--model', help='模型名称')
parser.add_argument('--history-count', type=int, help='历史命令数量')
args = parser.parse_args()
# 合并用户输入
user_input = ' '.join(args.user_input).strip()
logger.debug(f"用户输入: '{user_input}'")
# 加载配置
logger.debug("加载配置文件")
config_loader = ConfigLoader()
config = config_loader.get_config()
# 命令行参数优先级最高
if args.api_key:
config['api_key'] = args.api_key
@ -38,121 +44,134 @@ def main():
config['base_url'] = args.base_url
if args.model:
config['model'] = args.model
# 获取历史命令数量配置
history_count = args.history_count or config.get('max_history', 10)
# 如果配置不完整,使用配置管理器初始化
config_manager = ConfigManager()
if not all([config.get('api_key'), config.get('base_url'), config.get('model')]):
config = config_manager.get_or_create_config()
if not config:
logger.error("缺少必要的配置参数请通过命令行参数或配置文件提供API密钥、Base URL和模型名称。")
return 1
# 如果有命令行参数输入,直接处理
if user_input:
# 初始化历史管理器
history_manager = HistoryManager(max_history=history_count)
# 获取历史命令
history = history_manager.get_recent_history(history_count)
# 获取当前目录内容
try:
current_dir_content = glob.glob("*")
except Exception as e:
logger.warning(f"无法获取当前目录内容: {e}")
current_dir_content = []
# 获取系统 Shell 历史
shell_history = get_shell_history() # 使用默认值 20
# 初始化LLM客户端
try:
llm_client = LLMClient(config)
except Exception as e:
logger.error(f"LLM客户端初始化失败: {e}")
return 1
# 调用LLM生成命令
try:
generated_command = llm_client.generate_command(
user_input=user_input,
history=history,
current_dir_content=current_dir_content,
shell_history=shell_history
)
cleaned_command = clean_command(generated_command)
# 优化输出格式
print(f"\033[1;32m$\033[0m {cleaned_command}")
print("\033[1;37mPress Enter to execute...\033[0m")
# 等待用户回车确认执行
try:
input()
# 在用户的环境中执行命令
logger.info(f"执行命令: {cleaned_command}")
os.system(cleaned_command)
# 记录到历史
history_manager.add_command(user_input, cleaned_command)
logger.debug("命令已添加到历史记录")
except EOFError:
print("\n输入已取消。")
return 0
except Exception as exec_e:
logger.error(f"命令执行失败: {exec_e}")
return 1
except Exception as e:
logger.error(f"命令生成失败: {e}")
return 1
return 0
else:
# 处理空输入情况 - 生成基于上下文的推荐命令
history_manager = HistoryManager(max_history=history_count)
history = history_manager.get_recent_history(history_count)
try:
current_dir_content = glob.glob("*")
except Exception as e:
logger.warning(f"无法获取当前目录内容: {e}")
current_dir_content = []
# 获取系统 Shell 历史
shell_history = get_shell_history() # 使用默认值 20
try:
llm_client = LLMClient(config)
except Exception as e:
logger.error(f"LLM客户端初始化失败: {e}")
return 1
# 获取最后执行的命令以避免重复推荐
last_executed_command = history_manager.get_last_executed_command()
try:
recommendation = llm_client.generate_command(
user_input="",
history=history,
current_dir_content=current_dir_content,
shell_history=shell_history,
last_executed_command=last_executed_command
)
cleaned_recommendation = clean_command(recommendation)
if cleaned_recommendation.strip():
print(f"\033[1;34m💡 建议命令:\033[0m {cleaned_recommendation}")
print("\033[1;37mPress Enter to execute, or Ctrl+C to cancel...\033[0m")
try:
input()
logger.info(f"执行推荐命令: {cleaned_recommendation}")
os.system(cleaned_recommendation)
history_manager.add_command("自动推荐", cleaned_recommendation)
logger.debug("推荐命令已添加到历史记录")
except EOFError:
print("\n输入已取消。")
return 0
except Exception as exec_e:
logger.error(f"命令执行失败: {exec_e}")
return 1
else:
print("没有找到相关的命令建议。")
except Exception as e:
logger.error(f"命令推荐生成失败: {e}")
return 1
if __name__ == "__main__":
sys.exit(main())

autoterminal/utils/helpers.py

@ -1,3 +1,8 @@
import os
from typing import List
from autoterminal.utils.logger import logger
def clean_command(command: str) -> str:
"""清理命令字符串"""
# 移除可能的引号和多余空格
@ -7,3 +12,96 @@ def clean_command(command: str) -> str:
if command.startswith("'") and command.endswith("'"):
command = command[1:-1]
return command.strip()
def get_shell_history(count: int = 20) -> List[str]:
"""
获取系统 Shell 历史命令
Args:
count: 获取最近的命令数量
Returns:
最近执行的 Shell 命令列表
"""
history_commands = []
try:
# 尝试从环境变量获取历史文件路径
histfile = os.getenv('HISTFILE')
# 如果没有 HISTFILE根据 SHELL 推断
if not histfile or not os.path.exists(histfile):
home_dir = os.path.expanduser("~")
shell = os.getenv('SHELL', '')
# 根据当前 Shell 类型优先尝试对应的历史文件
possible_files = []
if 'zsh' in shell:
possible_files = [
os.path.join(home_dir, ".zsh_history"),
os.path.join(home_dir, ".zhistory"),
os.path.join(home_dir, ".bash_history"),
]
else: # bash 或其他
possible_files = [
os.path.join(home_dir, ".bash_history"),
os.path.join(home_dir, ".zsh_history"),
os.path.join(home_dir, ".zhistory"),
]
for file_path in possible_files:
if os.path.exists(file_path):
histfile = file_path
break
if histfile and os.path.exists(histfile):
logger.debug(f"读取 Shell 历史文件: {histfile}")
with open(histfile, 'r', encoding='utf-8', errors='ignore') as f:
lines = f.readlines()
# 过滤和清理命令
for line in lines:
line = line.strip()
# 跳过空行
if not line:
continue
# 处理 zsh 扩展历史格式 (: timestamp:duration;command)
if line.startswith(':'):
parts = line.split(';', 1)
if len(parts) > 1:
line = parts[1].strip()
# 过滤敏感命令(包含密码、密钥等)
sensitive_keywords = [
'password',
'passwd',
'secret',
'key',
'token',
'api_key',
'api-key']
if any(keyword in line.lower() for keyword in sensitive_keywords):
continue
# 过滤重复命令(保持顺序,只保留最后一次出现)
if line in history_commands:
history_commands.remove(line)
history_commands.append(line)
# 返回最近的 N 条命令
result = history_commands[-count:] if len(
history_commands) > count else history_commands
logger.debug(f"成功获取 {len(result)} 条 Shell 历史命令")
return result
else:
logger.warning("未找到 Shell 历史文件")
return []
except Exception as e:
logger.warning(f"获取 Shell 历史失败: {e}")
return []

autoterminal/utils/logger.py (new file)

@ -0,0 +1,38 @@
import os
import sys
from loguru import logger
# 移除默认的 handler
logger.remove()
# 获取日志级别(从环境变量或默认为 ERROR正式使用时只显示错误
log_level = os.getenv("AUTOTERMINAL_LOG_LEVEL", "ERROR")
# 添加控制台输出stderr- 默认只显示错误
logger.add(
sys.stderr,
format="<level>{level}: {message}</level>",
level=log_level,
colorize=True
)
# 添加文件输出(可选,存储在 ~/.autoterminal/ 目录)
enable_file_logging = os.getenv("AUTOTERMINAL_FILE_LOG", "true").lower() != "false"
if enable_file_logging:
home_dir = os.path.expanduser("~")
log_dir = os.path.join(home_dir, ".autoterminal")
os.makedirs(log_dir, exist_ok=True)
log_file = os.path.join(log_dir, "autoterminal.log")
logger.add(
log_file,
format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {name}:{function}:{line} - {message}",
level="DEBUG", # 文件记录所有级别的日志
rotation="10 MB", # 日志文件达到 10MB 时轮转
retention="7 days", # 保留最近 7 天的日志
compression="zip", # 压缩旧日志
encoding="utf-8"
)
# 导出 logger 供其他模块使用
__all__ = ["logger"]

pyproject.toml

@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "autoterminal"
version = "1.0.1"
description = "智能终端工具基于LLM将自然语言转换为终端命令(create by claude 4 sonnet)"
readme = "README.md"
requires-python = ">=3.10"
@ -13,9 +13,9 @@ authors = [
{name = "wds2dxh", email = "wdsnpshy@163.com"}
]
license = {text = "MIT"}
keywords = ["terminal", "ai", "llm", "command-line", "automation", "autoterminal"]
classifiers = [
"Development Status :: 6 - Mature",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
@ -28,7 +28,8 @@ classifiers = [
"Topic :: Utilities",
]
dependencies = [
"openai>=1.0.0",
"loguru>=0.7.0"
]
[project.urls]

setup.py (deleted)

@ -1,37 +0,0 @@
from setuptools import setup, find_packages
setup(
name="autoterminal",
version="0.1.1",
description="智能终端工具基于LLM将自然语言转换为终端命令(create by claude 4 sonnet)",
long_description=open("README.md").read(),
long_description_content_type="text/markdown",
author="wds",
author_email="wdsnpshy@163.com",
url="http://cloud-home.dxh-wds.top:20100/w/AutoTerminal",
license="MIT",
packages=find_packages(),
install_requires=[
"openai>=1.0.0",
],
entry_points={
'console_scripts': [
'at=autoterminal.main:main',
],
},
python_requires='>=3.10',
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: System :: Systems Administration",
"Topic :: Utilities",
],
keywords=["terminal", "ai", "llm", "command-line", "automation"],
)

uv.lock (generated)

@ -31,6 +31,7 @@ name = "autoterminal"
version = "0.1.1"
source = { editable = "." }
dependencies = [
{ name = "loguru" },
{ name = "openai" },
]
@ -40,7 +41,10 @@ dev = [
]
[package.metadata]
requires-dist = [
{ name = "loguru", specifier = ">=0.7.0" },
{ name = "openai", specifier = ">=1.0.0" },
]
[package.metadata.requires-dev]
dev = [{ name = "twine", specifier = ">=6.1.0" }]
@ -449,6 +453,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d3/32/da7f44bcb1105d3e88a0b74ebdca50c59121d2ddf71c9e34ba47df7f3a56/keyring-25.6.0-py3-none-any.whl", hash = "sha256:552a3f7af126ece7ed5c89753650eec89c7eaae8617d0aa4d9ad2b75111266bd", size = 39085, upload-time = "2024-12-25T15:26:44.377Z" },
]
[[package]]
name = "loguru"
version = "0.7.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "win32-setctime", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3a/05/a1dae3dffd1116099471c643b8924f5aa6524411dc6c63fdae648c4f1aca/loguru-0.7.3.tar.gz", hash = "sha256:19480589e77d47b8d85b2c827ad95d49bf31b0dcde16593892eb51dd18706eb6", size = 63559, upload-time = "2024-12-06T11:20:56.608Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0c/29/0348de65b8cc732daa3e33e67806420b2ae89bdce2b04af740289c5c6c8c/loguru-0.7.3-py3-none-any.whl", hash = "sha256:31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c", size = 61595, upload-time = "2024-12-06T11:20:54.538Z" },
]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
@ -816,6 +833,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795, upload-time = "2025-06-18T14:07:40.39Z" },
]
[[package]]
name = "win32-setctime"
version = "1.2.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b3/8f/705086c9d734d3b663af0e9bb3d4de6578d08f46b1b101c2442fd9aecaa2/win32_setctime-1.2.0.tar.gz", hash = "sha256:ae1fdf948f5640aae05c511ade119313fb6a30d7eabe25fef9764dca5873c4c0", size = 4867, upload-time = "2024-12-07T15:28:28.314Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e1/07/c6fe3ad3e685340704d314d765b7912993bcb8dc198f0e7a89382d37974b/win32_setctime-1.2.0-py3-none-any.whl", hash = "sha256:95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390", size = 4083, upload-time = "2024-12-07T15:28:26.465Z" },
]
[[package]]
name = "zipp"
version = "3.23.0"