  • Official announcement from CSPaper.org

    1 Topic
    2 Posts
    riverR
    Love it! Seems pretty simple to use! Thanks!
  • 57 Topics
    204 Posts
    @Joanne said in KDD 2025 2nd-round Review Results: How Did Your Paper Do?:

    KDD 2025 (February Cycle) – What the Score Patterns Reveal

    After combing through 22 self-reported results, three consistent patterns jump out:

    - All-3's are not lethal. Several papers with a flat 3-3 profile survived because nobody down-voted hard and the Area Chair (AC) was on their side.
    - 4–2 vs 3–3 is still a coin flip. A spiky 4–2 pair can trump steady 3–3s, yet clean consistency sometimes wins when the AC trusts uniform support.
    - Reviewer kindness matters. A single upgrade (e.g., Technical 3 → 4) in the last round carried borderline submissions over the line.

    Who Actually Got In? – Mini Score Sheet

    | Alias | Final Mean (N / T) | Earlier Lows | Verdict |
    | --- | --- | --- | --- |
    | author 1 #1 | 3.6 / 4.0 | early 3-3-4 mix | Accept |
    | author 1 #2 | 3.6 / 3.4 | weaker T | Accept |
    | author 2 | ≈ 3.2 / 2.8 | one reviewer gave 2 / 2 | Accept — “kind-hearted AC” |
    | author 3 | 3.0 / 3.0 | flat all-3's | Accept |
    | author 4 | 3.0 / 3.0 | two negative votes (2 / 2) | Accept |
    | author 5 | 3.4 / 4.0 | T started 3-3-2-2-2 | Accept — generous reviewer bumped T to 4 |

    Messages from this Small Sample

    - ≈ 3.0 averages can pass — the AC’s veto (positive or negative) is the real gatekeeper.
    - One low score plus a confident critique can still sink you — numbers alone aren’t everything.
    - Polite, point-by-point rebuttals can move scores, though not as often as we’d like.

    How are your scores? We’ll update the patterns once you share yours.

    Thanks for sharing! Mine got rejected, though -- mean T score around 2.5.
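The "single upgrade" effect described above is plain arithmetic on per-reviewer means. A minimal sketch, using hypothetical score lists (the `before`/`after` values are illustrative, not taken from any reported paper):

```python
def mean(scores):
    """Average of a list of per-reviewer scores."""
    return sum(scores) / len(scores)

# Hypothetical Technical (T) scores for a borderline paper: flat all-3's.
before = [3, 3, 3]
# One reviewer upgrades 3 -> 4 in the last round.
after = [4, 3, 3]

print(mean(before))            # 3.0
print(round(mean(after), 2))   # 3.33
```

A single bump moves the mean by one third of a point here, which is the order of difference separating several Accept rows in the score sheet.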
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions and more. Only unverified users can post (and edit or delete anytime afterwards).

    4 Topics
    4 Posts
    Impl. based on nr0034je9.zip.

    Table A: Model Performance on NLP Benchmarks

    | Model | SST-2 (Acc) | MNLI (Acc) | QNLI (Acc) | CoLA (Matthews) | Avg Score |
    | --- | --- | --- | --- | --- | --- |
    | BERT-Base | 91.2 | 84.6 | 90.1 | 58.2 | 81.0 |
    | RoBERTa-Base | 92.3 | 87.4 | 91.8 | 63.1 | 83.7 |
    | GPT-3 (175B) | 94.1 | 88.9 | 93.0 | 66.4 | 85.6 |
    | Our Method | 94.8 | 89.7 | 93.5 | 68.9 | 86.7 |

    Table B: Ablation Study on Model Components (Evaluated on MNLI)

    | Configuration | Attention Mechanism | Pretraining Corpus | MNLI (Acc) |
    | --- | --- | --- | --- |
    | Full Model | Multi-head Self-Attn | Custom + Public | 89.7 |
    | – w/o Custom Corpus | Multi-head Self-Attn | Public Only | 87.1 |
    | – w/o Attention Refinement Block | Basic Self-Attn | Custom + Public | 86.5 |
    | – w/o Positional Embeddings | Multi-head Self-Attn | Custom + Public | 85.2 |
    | – Random Initialization | — | — | 72.4 |
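Assuming the "Avg Score" column in Table A is the unweighted mean of the four benchmark columns (the post does not state how it was computed), it can be reproduced with a short script (scores copied from the table):

```python
# Scores copied from Table A; column order: SST-2, MNLI, QNLI, CoLA.
scores = {
    "BERT-Base":    [91.2, 84.6, 90.1, 58.2],
    "RoBERTa-Base": [92.3, 87.4, 91.8, 63.1],
    "GPT-3 (175B)": [94.1, 88.9, 93.0, 66.4],
    "Our Method":   [94.8, 89.7, 93.5, 68.9],
}

# Unweighted mean over the four benchmarks (assumed definition of "Avg Score").
avg = {model: sum(vals) / len(vals) for model, vals in scores.items()}

for model, a in avg.items():
    print(f"{model}: {a:.1f}")
```

The computed means agree with the table's Avg Score column to within one-decimal rounding, which supports the unweighted-mean reading.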