
Comparing changes

base repository: CyCle1024/lmdeploy (base ref: main)
head repository: InternLM/lmdeploy (head ref: main)

  • 19 commits
  • 199 files changed
  • 11 contributors

Commits on Mar 17, 2025

  1. 1fd1f32 (signed with GitHub’s verified signature; the key has expired)
  2. Fix the bug for reading dict error (InternLM#3196)

    * Update qwen2.py
    
    * Update mllama.py
    
    fix the bug for reading dict
    
    * Update qwen2_vl.py
    
    fix the bug for reading dict
    
    * fix qwen2_5_vl.py readdict error
    
    
    Co-authored-by: zxy <[email protected]>
    GxjGit and CUHKSZzxy authored Mar 17, 2025
    9958b89

Commits on Mar 18, 2025

  1. docs: update ascend docs for docker running (InternLM#3266)

    * docs: update ascend docs for docker running
    
    * ci: fix mdformat linting
    CyCle1024 authored Mar 18, 2025
    9bff3a7
  2. d95ecc0
  3. Fix get ppl (InternLM#3268)

    * fix get_ppl
    
    * remove useless code
    
    * remove debug logs
    lvhan028 authored Mar 18, 2025
    028b94c
  4. Add gemma3 (InternLM#3272)

    * Add gemma3 text model
    
    * Add gemma vl
    
    * update doc
    
    * add tp
    
    * fix doc
    
    * readmes
    AllentDan authored Mar 18, 2025
    7c33db5
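Commit 028b94c above fixes `get_ppl`. As background on the quantity involved (this is a standalone illustration of perplexity, not lmdeploy's implementation): perplexity is the exponential of the average negative log-likelihood of the scored tokens.

```python
import math

def get_ppl(token_logprobs):
    """Perplexity from per-token log-probabilities (natural log)."""
    if not token_logprobs:
        raise ValueError("need at least one token")
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A uniform distribution over 4 tokens gives log p = ln(1/4) per token,
# so the perplexity is exactly 4.
print(get_ppl([math.log(0.25)] * 10))  # ≈ 4.0
```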

Commits on Mar 19, 2025

  1. bump version to v0.7.2 (InternLM#3252)

    * bump version to v0.7.2
    
    * bump version to v0.7.2
    
    * remove print
    lvhan028 authored Mar 19, 2025
    6f1277e

Commits on Mar 20, 2025

  1. fix activation grid oversize (InternLM#3282)

    * fix activation grid oversize
    
    * optimize
    
    * fix quant fp8
    grimoire authored Mar 20, 2025
    1e77ed2
  2. Add spaces_between_special_tokens to /v1/interactive and make compatible with empty text (InternLM#3283)
    
    * add spaces between special token to interactive endpoint
    
    * empty input
    AllentDan authored Mar 20, 2025
    9f211a8
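For context on the `spaces_between_special_tokens` option added in InternLM#3283: during detokenization this kind of flag typically controls whether consecutive special tokens are joined with a space or concatenated directly. A toy sketch of that semantics only (not lmdeploy's or any tokenizer's actual code):

```python
def join_special_tokens(special_tokens, spaces_between_special_tokens=True):
    # With the flag on, special tokens are separated by single spaces;
    # with it off, they are concatenated back-to-back.
    sep = " " if spaces_between_special_tokens else ""
    return sep.join(special_tokens)

print(join_special_tokens(["<s>", "<unk>", "</s>"], True))   # → "<s> <unk> </s>"
print(join_special_tokens(["<s>", "<unk>", "</s>"], False))  # → "<s><unk></s>"
```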

Commits on Mar 21, 2025

  1. add env var to control timeout (InternLM#3291)

    * add env var to control timeout
    
    * update
    
    * update
    
    * fix lint
    CUHKSZzxy authored Mar 21, 2025
    da0bf7b
  2. a2c38da
  3. 81c815e
  4. refactor attn param (InternLM#3164)

    * refactor attn param
    
    * fix lint
    
    * fix build
    
    * fix ut
    
    * use creator to create rope_param
    
    * reuse parse func
    
    * fix ut
    
    * fix comments
    
    * update name
    
    * fix dynamic
    
    * fix deepseekv2 yarn
    
    * use single dataclass
    
    * fix loading workspace model
    irexyc authored Mar 21, 2025
    82d0a90
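Commit da0bf7b in this range adds an environment variable to control a timeout. The variable name and default below are hypothetical, not lmdeploy's actual setting; this just sketches the usual pattern of reading a numeric value from the environment with a safe fallback:

```python
import os

DEFAULT_TIMEOUT_S = 600.0  # illustrative default, not lmdeploy's

def get_timeout(env_var="MY_APP_REQUEST_TIMEOUT", default=DEFAULT_TIMEOUT_S):
    """Read a timeout in seconds from the environment, else use the default."""
    raw = os.getenv(env_var)
    if raw is None:
        return default
    try:
        return float(raw)
    except ValueError:
        # Malformed values fall back to the default rather than crashing.
        return default

os.environ["MY_APP_REQUEST_TIMEOUT"] = "30"
print(get_timeout())  # → 30.0
```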

Commits on Mar 22, 2025

  1. Torch dp support (InternLM#3207)

    * better dist context
    
    * can not exit
    
    * multinode support
    
    * better exception
    
    * refactor
    
    * fix local rank
    
    * replace group
    
    * fix dist
    
    * remove useless code
    
    * remove finish flag
    
    * refactor engine and model agent
    
    * uni executor
    
    * wip
    
    * tp
    
    * fix
    
    * less async
    
    * circle buf
    
    * event per block
    
    * fast mp
    
    * fix error handler
    
    * remove safe wait
    
    * context in model agent
    
    * fix on stop
    
    * check before init
    
    * fix tp close
    
    * ray ver0
    
    * fix close
    
    * fix remote code
    
    * optimize ray
    
    * better checker and logger
    
    * pack tensor
    
    * auto check dist
    
    * fix mp gloo
    
    * add timer tools
    
    * better scheduler
    
    * fix mp hang
    
    * fix mp
    
    * fix chat
    
    * less output
    
    * merge main
    
    * optimize ray get output
    
    * remove nsight runtime env
    
    * dag
    
    * optimize mp & lint
    
    * optimize mp
    
    * add base workerwrapper
    
    * fix gather, update flags
    
    * better return mask
    
    * add choice
    
    * enable mp,ray with worldsize=1
    
    * fix mp exit
    
    * fix mp vlm
    
    * chat exit
    
    * add docs
    
    * lint
    
    * doc
    
    * dp check
    
    * fix blocked fp8 moe
    
    * remove mask
    
    * support dp, async
    
    * remove debug line
    
    * fix model tp
    
    * support sync execute
    
    * fix chat stopwords
    
    * refactor chat
    
    * add warmup
    
    * disable warmup
    
    * dp support
    
    * fix ut, merge main, force eager
    
    * support qwen2/internlm2/internlm3
    
    * support blocked fp8 all gather
    
    * add more model support
    
    * fix exit
    
    * fix merge
    
    * fix sync long context
    
    * support process group on ray
    
    * change dp master addr and master port
    
    * update log level
    
    * support moe tp
    
    * fix tp1 dp2
    
    * fix
    
    * fix
    
    * wait handle
    
    * remove flag
    
    * ensure execute order
    
    * remove import
    
    * add serve args
    
    * force eager
    grimoire authored Mar 22, 2025
    f6e7ec7
  2. Add deep gemm with tma pre allocated (InternLM#3287)

    * add deep gemm with tma pre allocated
    
    * add comment
    
    * add comment
    
    * dispatch
    
    * no use_deep_gemm arg
    
    * remove DeepGemmBlockedF8
    
    * missed op type
    
    * latest get_best_config
    
    * add a line of debug
    AllentDan authored Mar 22, 2025
    Copy the full SHA
    e37a76d View commit details
  3. 8f68177
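Commit e37a76d integrates DeepGEMM for blocked-FP8 GEMM. As a rough illustration of block-wise quantization scaling in general (a toy sketch; the block size and layout here are assumptions, not DeepGEMM's actual scheme): each contiguous block of values shares one scale, commonly the block's absolute maximum, so that every value in the block fits the representable range after dividing by the scale.

```python
def block_absmax_scales(values, block=128):
    """One abs-max scale per contiguous block of `block` values."""
    return [max(abs(v) for v in values[i:i + block])
            for i in range(0, len(values), block)]

print(block_absmax_scales([1.0, -3.0, 0.5, 2.0], block=2))  # → [3.0, 2.0]
```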

Commits on Mar 24, 2025

  1. [ci] add think function testcase (InternLM#3299)

    * update
    
    * update
    
    * update
    
    * update timeout
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    
    Co-authored-by: zhulinJulia24 <[email protected]>
    zhulinJulia24 and WillowsZhu authored Mar 24, 2025
    2774f8e
  2. Add mixed DP + TP (InternLM#3229)

    * comm abstraction
    
    * add custom
    
    * fused rms norm
    
    * refactor
    
    * push-based kernel
    
    * optimize for small hidden dims
    
    * integration
    
    * clean up
    
    * export options & fix things
    
    * allgather2d & VMM allocation
    
    * optimize allgather2d
    
    * remove obsolete comm utils
    
    * handle non-multi-gpu build
    
    * fix lint
    
    * fix lint
    
    * avoid using mscclpp repo (some deps are not needed)
    
    * fix lint
    
    * fix nccl version & clean up deps
    
    * fix lint
    
    * custom -> native
    
    * rename
    
    * fix p-lora
    
    * fix lm head
    
    * log fatal exception explicitly
    
    * initial data parallel support
    
    * sync dp + tp
    
    * mixed `d*t0 | t1`
    
    * refactor
    
    * refactor
    
    * fix ut
    
    * refactor
    
    * asymmetrical allreduce
    
    * fix
    
    * fix nccl<2.18
    
    * fix
    
    * fix
    
    * fix
    
    * fix
    
    * fix converter
    
    * fix converter
    
    * fix tp for converted model
    
    * fix tp for converted model
    
    * assert tp size loading from workspace
    lzhangzz authored Mar 24, 2025
    63b13e8
  3. 8b12b4d
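Commit 63b13e8 adds mixed data parallelism and tensor parallelism. One common way to lay out such a hybrid (an illustration of the general idea, not necessarily lmdeploy's rank mapping) is to make consecutive global ranks one tensor-parallel group and treat each TP group as one data-parallel replica:

```python
def rank_to_dp_tp(rank, tp_size):
    # Consecutive ranks form one TP group; each TP group is one DP replica.
    return rank // tp_size, rank % tp_size

# 4 GPUs with tp_size=2 → two DP replicas of two-way tensor parallel:
print([rank_to_dp_tp(r, 2) for r in range(4)])
# → [(0, 0), (0, 1), (1, 0), (1, 1)]
```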