Source code: tgn-adpt.zip
magicparrots
-
KDD2025-2nd-tgn-adapted: anonymous source code for review only
-
KDD 2025 2nd-round Review Results: How Did Your Paper Do?
A data point:
GNN work, which got:
Novelty: 3, 2, 2, 3, 2
Technical Quality: 2, 2, 2, 2, 2
Confidence: 3, 4, 3, 4, 4
Need to write a rebuttal? Anyone know more? A 2-week challenge ahead!
-
ACL 2025 Reviews Are Out: To Rebut or Flee? The Harsh Reality of NLP's "Publish or Perish" Circus
Continued sample scores from Zhihu
Borderline to Promising Scores
-
4 / 3 / 3 (Confidence: 3 / 4 / 4)
Hoping for Main Conference acceptance.
-
4 / 3 / 2.5 (Confidence: 3 / 4 / 4)
Reviewer hinted score could increase after rebuttal.
-
3.5 / 3.5 / 3 (Meta: 3)
From the December submission round. Question: is that enough for Findings?
-
3.5 / 3.5 / 2.5 (Confidence: 3 / 3 / 4)
Author in the middle of a tough rebuttal. Main may be ambitious.
-
3.5 / 3 / 2.5
Open question: what's the chance for Findings?
-
3 / 3 / 2.5 (Confidence: 3 / 3 / 3)
Undergraduate author. Aims for Findings. Rebuttal will clarify reproducibility.
-
3.5 / 2.5 / 2.5 (Confidence: 3 / 2 / 4)
Community sees this as borderline.
Mediocre / Mixed Outcomes
-
2 / 3 / 4
One reviewer bumped the score after 6 minutes (!), but still borderline overall.
-
2 / 2.5 / 4
Rebuttal effort was made, but one reviewer already dropped. Probably withdrawn.
-
2 / 3.5 / 4
Surprisingly high for a paper the author didn't expect to succeed.
Weak or Rejected Outcomes
-
4 / 1.5 / 2 (Confidence: 5 / 3 / 4)
Likely no chance. Community reaction: "Is it time to give up?"
-
3 / 2.5 / 2.5 (Confidence: 3 / 3 / 5)
Rebuttal might help, but outlook is dim.
-
1 / 2.5 (Confidence: 5)
Probably a confused or low-effort review.
-
OA 1 / 1 / 1
A review like this reportedly existed (likely invalid); the community flagged it.
Additional comments from the community
- Some reviewers are still clearly junior-level, or appear to use AI tools for review generation.
- Findings threshold widely believed to be OA ≥ 2.5 to 3.0, assuming some confidence in the reviews.
- Review score inflation is low this round: average OA above 3.0 is rare, even among decent papers.
- Several December and February round submissions are said to be evaluated independently due to evolving meta-review policies.
Summary
- Score distributions reported in the Chinese community largely align with those on Reddit (see my previous post): 3.0 is the magic number for Findings, and 3.5+ is needed for Main.
- Rebuttal might swing things, but expectations are tempered.
- Many junior researchers are actively sharing scores to gauge chances and strategize next steps (rebut, withdraw, or resubmit elsewhere).
-
-
ACL 2025 Reviews Are Out: To Rebut or Flee? The Harsh Reality of NLP's "Publish or Perish" Circus
Some review scores I have seen
Stronger or Mid-range Submissions
-
OA: 4 / 4 / 4 (Confidence: 2 / 2 / 2)
Concern: low confidence may hurt chances.
-
OA: 4, 4, 2.5 (Confidence: 4, 4, 4)
Community says likely for Findings.
-
OA: 3, 3, 3 (Confidence: 5, 4, 4)
Possibly borderline for Findings.
-
OA average: 3.38, Excitement: 3.625
Decent shot, though one reviewer gave 2.5.
-
OA average: 3.33
Reported as the highest OA seen by one reviewer, which suggests the bar is low this cycle.
Weaker Submissions
-
OA: 2.5, 2.5, 1.5 (Confidence: 4, 3, 3)
Unlikely to be accepted.
-
OA: 2, 1.5, 2.5 (Confidence: 4, 4, 4)
Most agree there is no chance for Findings.
-
OA: 3, 3, 2.5 (Confidence: 4, 3, 4)
Marginal; some optimism for Findings.
-
Only two reviews, one with meaningless 1s and vague reasoning.
ACs are often unresponsive in such cases.
Some guesses from the community
Findings Track:
- Informal threshold: OA ≥ 3.0
- Strong confidence and soundness can help borderline cases
Main Conference:
- Informal threshold: OA ≥ 3.5 to 4.0
- Very few reports of OA > 3.5 this cycle
Score Changes During Rebuttal:
- Rare but possible (e.g., 2 → 3)
- No transparency or reasoning shared
Info on review & rebuttal process
- Reviews were released gradually, not all at once
- Emergency reviews still being requested even after deadline
- Author response period extended by 2 days
- Confirmed via ACL 2025 website and ACL Rolling Review
- Meta-reviews and decisions expected April 15
To summarize
- This cycle's review scores seem low overall
- OA 3.0 is a realistic bar for Findings track
- OA 3.5+ likely needed for Main conference
- First-time submitters often confused by lack of clear guidelines and inconsistent reviewer behavior
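The informal thresholds above are community guesses, not official cutoffs, but the arithmetic behind them is easy to sketch. A minimal illustrative snippet: `likely_outcome` is a hypothetical helper name, the example score list is made up, and the 3.0 / 3.5 cutoffs are simply the informal numbers reported in this post.

```python
def likely_outcome(oa_scores):
    """Map average Overall Assessment (OA) scores to the community's
    informal guess: Main (>= 3.5), Findings (>= 3.0), else Reject.
    These cutoffs are rumors reported above, not official ACL policy."""
    avg = sum(oa_scores) / len(oa_scores)
    if avg >= 3.5:
        return "Main"
    if avg >= 3.0:
        return "Findings"
    return "Reject"

# Hypothetical scores averaging 3.375, near the "OA Average: 3.38" case above
print(likely_outcome([4, 3, 3, 3.5]))  # -> Findings
```

Of course, real decisions also weigh confidence, soundness, and the meta-review, so this is only a rough sanity check for score-watchers.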
-
-
ICML 2025 Review Results Are Coming! Fair or a Total Disaster?
334 222 234 124 335
223 344 445 233 122
-
ICML2025-MainTrack-Submission#280-AllReviewers
Table 1. Performance Comparison on Benchmark Datasets
Method              CIFAR-10 (Acc %)  CIFAR-100 (Acc %)  TinyImageNet (Acc %)  Params (M)  FLOPs (G)
ResNet-18           94.5              76.3               64.1                  11.2        1.8
ViT-Small           95.2              77.9               65.7                  21.7        4.6
Ours (GraphFormer)  96.1              79.5               67.3                  19.8        3.9
Table 2. Ablation Study on Temporal Encoding
Method Variant           CIFAR-10 (Acc %)  TinyImageNet (Acc %)
Ours (No Time Encoding)  95.4              66.1
Ours (Sinusoidal Only)   95.8              66.8
Ours (Learnable Time)    96.1              67.3
Table 3. Robustness to Input Noise on CIFAR-10 (% Accuracy)
Method     No Noise  Gaussian (σ=0.1)  Gaussian (σ=0.3)  FGSM (ε=0.1)
ResNet-18  94.5      91.2              83.7              78.9
ViT-Small  95.2      92.4              85.1              80.3
Ours       96.1      93.5              87.0              83.7
Table 4. Generalization to Out-of-Distribution (OOD) Data
Method     In-Domain (CIFAR-10)  OOD (SVHN)  OOD (CIFAR-10-C)
ResNet-18  94.5                  76.3        71.2
ViT-Small  95.2                  78.1        73.0
Ours       96.1                  80.5        75.3