
\ctitle{带单点度数恰好约束的词典序最小生成树算法}

\cauthor{数学强基 2301 刘欣楠}

\csubject{}

\csupervisor{}

\ckeywords{词典序最小生成树;度限制生成树;单顶点度约束;贪心算法;图论}

\cproddate{\the\year 年\the\month 月}

\ctype{}

\etitle{An Algorithm for Lexicographically Minimal Spanning Trees with an Exact Single-Vertex Degree Constraint}

\eauthor{Xinnan Liu}

\esubject{}

\esupervisor{}

\ekeywords{Lexicographically minimum spanning tree; Degree-constrained spanning tree; Single vertex degree constraint; Greedy algorithm; Graph theory}

\ecate{}

\eproddate{\monthname{\month}\ \the\year}

\etype{}

\cabstract{ 词典序最小生成树(Lexicographically Minimum Spanning Tree, LMST)是生成树优化中约束较强的一类问题,其目标是使生成树按非降序排列的边权序列在所有生成树中词典序最小。度约束最小生成树(Degree-Constrained Minimum Spanning Tree, DCMST)在网络设计、负载均衡等领域有重要应用,但一般情形下为 NP-困难问题。本文研究其中的一个特殊情形:在给定边权互异的无向连通图上,求解使指定顶点度数恰好为 $k$ 的词典序最小生成树。

针对该问题,我们提出了一种基于 Kruskal 算法、强制边选择与候选边分类的高效算法。该算法首先在不使用与指定顶点相连的边的前提下构造最小生成森林,再为各连通分量强制选择与该顶点相连的最小权值边以保证连通性;随后将剩余候选边划分为可改善词典序的一类边与可能恶化词典序的二类边,按严格的优先顺序进行增广,直至满足度数约束。我们给出了基于交换论证的严格正确性证明,表明算法输出在所有可行解中唯一且词典序最优。

理论分析表明,该算法的时间复杂度为 $O(m \log m)$,空间复杂度为 $O(m)$。在随机图与真实网络数据上的实验结果验证了算法的高效性,其在保证最优性的前提下可处理规模达 $10^5$ 条边的图实例。

}

\eabstract{ The lexicographically minimum spanning tree (LMST) problem seeks a spanning tree whose edge-weight sequence, sorted in non-decreasing order, is lexicographically smallest among all candidates. The degree-constrained minimum spanning tree (DCMST) problem has important applications in network design, load balancing, and other domains, but is NP-hard in general. In this paper, we focus on a special case: given a connected undirected graph with distinct edge weights, find an LMST in which a designated vertex has degree exactly $k$.

We propose an $O(m \log m)$-time algorithm based on Kruskal's method, mandatory edge selection, and classification of candidate edges. The algorithm first constructs a minimum spanning forest without edges incident to the designated vertex, then connects each resulting component to that vertex using the smallest available incident edge to ensure connectivity. The remaining candidate edges are classified into Type I (improving) and Type II (degrading) edges according to their impact on the lexicographic order, and are processed in a strict priority sequence until the degree constraint is satisfied. We present a rigorous exchange-based correctness proof showing that the output is unique and lexicographically optimal.

Theoretical analysis shows that the algorithm runs in $O(m \log m)$ time and $O(m)$ space. Experimental results on both random graphs and real-world network data confirm the algorithm's efficiency, handling instances with up to $10^5$ edges while guaranteeing optimality.

}
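The problem the abstracts define can be made concrete with a small brute-force reference solver. This is illustrative only: it enumerates all edge subsets and is exponential, not the $O(m \log m)$ algorithm described above; the graph instance at the bottom is hypothetical.

```python
from itertools import combinations

def lex_min_spanning_tree_exact_degree(n, edges, r, k):
    """Brute force: among all spanning trees of the graph (vertices 0..n-1,
    edges given as (weight, u, v) tuples with distinct weights) in which
    vertex r has degree exactly k, return (sorted_weights, tree_edges) for
    the one whose sorted weight sequence is lexicographically smallest,
    or None if no such tree exists."""
    best = None
    for cand in combinations(edges, n - 1):
        # exact degree check at the designated vertex
        if sum(1 for _, u, v in cand if r in (u, v)) != k:
            continue
        # acyclicity check via union-find; n-1 acyclic edges form a tree
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        ok = True
        for _, u, v in cand:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False  # adding this edge would close a cycle
                break
            parent[ru] = rv
        if not ok:
            continue
        key = sorted(w for w, _, _ in cand)
        if best is None or key < best[0]:
            best = (key, cand)
    return best

# hypothetical instance: a 4-cycle with one chord, designated vertex 0
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]
res = lex_min_spanning_tree_exact_degree(4, edges, r=0, k=2)
```

On this instance the lexicographically smallest feasible weight sequence is $[1, 2, 4]$, attained by the tree $\{(0,1), (1,2), (3,0)\}$; with $k = 0$ no spanning tree exists, since every spanning tree must use at least one edge incident to vertex 0.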
