mRemote

mRemote is a full-featured, multi-tab remote connections manager.

It allows you to store all your remote connections in a simple yet powerful interface.

Currently these protocols are supported:

Binary and source packages are freely available from the downloads page.

Learn more about the features, check out some screenshots and learn how to use mRemote.

For feature requests or bug reports please visit the AppJuice.org forum.

http://www.mremote.org/wiki/Downloads.ashx

BgInfo v4.16

By Bryce Cogswell

Published: October 1, 2009

 

Introduction

How many times have you walked up to a system in your office and needed to click through several diagnostic windows to remind yourself of important aspects of its configuration, such as its name, IP address, or operating system version? If you manage multiple computers you probably need BGInfo. It automatically displays relevant information about a Windows computer on the desktop’s background, such as the computer name, IP address, service pack version, and more. You can edit any field as well as the font and background colors, and can place it in your startup folder so that it runs every boot, or even configure it to display as the background for the logon screen.

Because BGInfo simply writes a new desktop bitmap and exits, you don’t have to worry about it consuming system resources or interfering with other applications.
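
To have the background refresh automatically at logon, BGInfo can be called from a Startup-folder shortcut or a logon script using its command-line switches. A minimal sketch, assuming the tool has been unpacked to C:\Tools\BGInfo and the configuration saved as default.bgi (both paths are illustrative, not from the source):

C:\Tools\BGInfo\Bginfo.exe C:\Tools\BGInfo\default.bgi /timer:0 /nolicprompt /silent

Here /timer:0 applies the bitmap immediately without the countdown dialog, /nolicprompt suppresses the one-time license prompt, and /silent hides any error messages, so the tool simply writes the wallpaper and exits.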

http://technet.microsoft.com/en-us/sysinternals/bb897557.aspx

Foxit PDF Reader

The best choice for reading PDF documents

Reading and annotating PDF documents and filling in PDF forms are things that consumers, businesses, government agencies, and educational institutions need to do every day when working with PDF files. They need a PDF reader with the following characteristics:

  • Fast — no long wait when opening a file.
  • Small — uses few system resources and is easy to use.
  • Secure — protects confidential information and guards against virus infection.
  • Platform — new features can be added to the reader as needed for internal use or for resale.

Foxit PDF Reader is a small, fast, and feature-rich PDF reader that lets you open, view, and print any PDF file at any time. Unlike other free PDF readers, it offers easy-to-use features such as adding annotations, filling in forms, and adding text to PDF documents. It has a small footprint, starts quickly, renders pages rapidly, and uses little memory, which suits the PDF-reading requirements of mobile phones, handheld PCs, and other portable devices. Its convenient and easy-to-use reading, annotation, and printing features simplify collaboration on PDF documents. Its security platform blocks malicious attacks and provides a solid security barrier, and its reliable digital signature verification keeps electronic files from being tampered with or forged in transit, ensuring that files are transferred securely.

http://www.foxitsoftware.cn/

SYSPRP LaunchDll:Failure occurred

Error Info:
2012-05-10 10:22:02, Error [0x0f0082] SYSPRP LaunchDll:Failure occurred while executing 'C:\Windows\system32\msdtcprx.dll,SysPrepDtcCleanup', returned error code -2146434815[gle=0x000000b7]
2012-05-10 10:22:02, Error [0x0f0070] SYSPRP RunExternalDlls:An error occurred while running registry sysprep DLLs, halting sysprep execution. dwRet = -2146434815
2012-05-10 10:22:02, Error [0x0f00a8] SYSPRP WinMain:Hit failure while processing sysprep cleanup providers; hr = 0x80100101

Solution:
Make sure the Microsoft Distributed Transaction Coordinator (MSDTC) service is working correctly, then run sysprep again.
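
A remediation commonly reported for this SysPrepDtcCleanup failure is to reinstall MSDTC before re-running sysprep; a sketch, run from an elevated command prompt (not quoted from the referenced article):

msdtc -uninstall
msdtc -install
net start msdtc

The -uninstall and -install switches are standard msdtc.exe options; once the Distributed Transaction Coordinator service starts again, sysprep /generalize can be retried.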

Refer:
http://ossmall.info/error-message-when-you-run-the-sysprep-generalize-command-on-a-windows-vista-based-computer-a-fatal-error-occurred-while-running-sysprep/

How to completely remove a damaged or retired DC

When a DC is migrated away, or one of several DCs fails and has to be retired, it must be uninstalled cleanly; otherwise the directory still assumes it exists and keeps trying to replicate with it, which causes all kinds of problems. How do you remove it from the domain completely? Here is a method I find works well (a sample ntdsutil session follows the notes below):
1. On a surviving DC, run ntdsutil.
2. Type metadata cleanup.
3. Type connections.
4. Type connect to server <hostname of a surviving DC> (the DC you are on is fine).
5. Once connected, type quit to leave the connections prompt.
6. Type select operation target (to choose the site, domain, and server).
7. Type list domains (lists the domains).
8. Type select domain * (select the domain of the DC to be removed; * is the number shown in front of that domain).
9. Type list sites (lists the sites).
10. Type select site * (select the site that contains the DC to be removed; * is the number shown in front of that site).
11. Type list servers in site (lists the servers in the selected site).
12. Type select server * (* is the number shown in front of the DC you want to remove).
13. After selecting, type quit to drop back to the metadata cleanup prompt.
14. Type remove selected server.
15. A dialog box asks whether you really want to delete it; choose "Yes".
16. Open "Active Directory Sites and Services", find the DC being removed, and delete it.
17. Restart the server.
After the restart, open DNS and you will see that the stale records of the removed DC are gone as well; the cleanup is thorough.
A few points to note:
1. When selecting the domain, site, and server, look carefully and be sure it is the one you intend to remove; work with care.
2. When selecting the domain, site, and server, you must use the number shown in front of the item; typing the name directly does not work and produces an error.
3. If you cannot remember the exact commands during the procedure, use "?" to list the commands available at each prompt and follow along from there.
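
For reference, the whole procedure can be typed in a single ntdsutil session from an elevated command prompt on a surviving DC; the server name dc01 and the item number 0 below are placeholders, not values from these notes:

ntdsutil
metadata cleanup
connections
connect to server dc01
quit
select operation target
list domains
select domain 0
list sites
select site 0
list servers in site
select server 0
quit
remove selected server

ntdsutil asks for confirmation before removing the selected server's metadata; afterwards finish up in AD Sites and Services and check DNS as described above.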

Replication problem between sites in the domain

Two sites, each with two DCs:
– Beijing : bjdc01, bjdc02
– Xian: xadc01, xadc02

Issue: bjdc01 and bjdc02 were reinstalled, their SIDs were regenerated, and they were promoted back to DCs with dcpromo. After that, the Beijing and Xian sites could no longer replicate with each other.

Fix:
1. Remove xadc01 and xadc02 from bjdc01 with ntdsutil (metadata cleanup, as in the previous section).
2. Shut down xadc01 and xadc02.
3. Install two new DCs in Xian, named xadc03 and xadc04, and promote them with dcpromo.
4. The two sites now replicate normally again.

Replication troubleshooting commands:

repadmin /showrepl

repadmin /kcc

repadmin /showrepl thsglobal.local

repadmin /replsummary

dcdiag /test:connectivity

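Once these commands report healthy replication links, a full push of all partitions across sites can be forced; a sketch using standard repadmin switches (not part of the original notes):

repadmin /syncall /AdeP

Here /A syncs all naming contexts, /d reports servers by distinguished name, /e includes partners in other sites, and /P pushes changes out from the DC the command is run on.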

Differences between unicast and multicast in NLB configuration

2008-12-08 9:40

Unicast: On each cluster member, NLB overwrites the manufacturer-assigned MAC address of the network adapter and uses the same unicast MAC address on every member. The advantage of this mode is that it works seamlessly with most routers and switches. The drawbacks are that traffic destined for the cluster is flooded to every port on the switch VLAN, and that the hosts cannot communicate with one another through the adapter NLB is bound to; in other words, the physical hosts cannot reach each other over that adapter. When unicast mode is selected at NLB creation time, the "network address" shown under "Cluster IP configuration" begins with "02-BF" followed by the hexadecimal representation of the cluster IP address; this becomes the actual MAC address of the host, and every host that later joins the cluster is changed to this same MAC address. In Windows Server 2003 SP1, Microsoft modified the NLB unicast-mode driver so that cluster members can communicate with each other over their original dedicated IP addresses; for details see KB898867, "Unicast NLB nodes cannot communicate over an NLB-enabled network adaptor in Windows Server 2003". The registry change from that article is summarized in the steps below, followed by a scripted equivalent.

  1. Click Start, click Run, type regedit, and then click OK.
  2. Locate and click the following registry subkey: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WLBS\Parameters\Interface\{GUID}
    Note: The {GUID} placeholder represents the GUID of the specific NLB instance. You can use the ClusterIPAddress subkey in this hive to tell the different NLB clusters apart.
  3. On the Edit menu, click New, click DWORD Value, and then add the following value:
    Value name: UnicastInterHostCommSupport
    Value data: 1

    Note: Setting the UnicastInterHostCommSupport registry entry to any non-zero value enables unicast inter-host communication support.

  4. Exit Registry Editor.
  5. Open a command prompt, and then type the following command: NLB RELOAD
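
The same change can be scripted with the built-in reg utility; a minimal sketch, where the all-zeros interface GUID is a placeholder that must be replaced with the GUID found under the WLBS\Parameters\Interface key on your host:

reg add "HKLM\System\CurrentControlSet\Services\WLBS\Parameters\Interface\{00000000-0000-0000-0000-000000000000}" /v UnicastInterHostCommSupport /t REG_DWORD /d 1 /f
NLB RELOAD

The /t REG_DWORD and /d 1 arguments create the DWORD value with data 1, matching step 3 above, and NLB RELOAD re-reads the NLB configuration as in step 5.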

Multicast: The manufacturer's MAC address is left unchanged, but a layer-2 multicast MAC address is added to the network adapter, and all inbound traffic arrives at that multicast MAC address. The advantage is that, by creating static entries in the switch's content-addressable memory (CAM) table, inbound traffic can be delivered only to the cluster hosts. One drawback is that those CAM entries must be statically associated with a set of switch ports; without them, inbound traffic is still flooded to every port on the switch VLAN. Another drawback is that many routers will not automatically associate a unicast IP address (the cluster's virtual IP address) with a multicast MAC address; some routers can hold this association if it is configured statically. When multicast mode is selected at NLB creation time, the "network address" under "Cluster IP configuration" begins with "03-BF" followed by the hexadecimal representation of the IP address. Multicast mode also offers an extra checkbox, "IGMP Multicast". If it is selected, NLB, as in plain multicast mode, leaves the manufacturer's MAC address unchanged but adds an IGMP multicast address to the adapter, and the NLB hosts emit IGMP join messages for that group. If the switch snoops these messages, it can populate its CAM table with the required multicast address so that inbound traffic is not flooded to every port on the VLAN; this is the main advantage of this cluster mode. The drawbacks are that some switches do not support IGMP snooping, and routers must still be able to map the unicast IP address to the multicast MAC address. In IGMP multicast mode, a MAC address beginning with "01-00-5E" is used. In multicast mode, the physical hosts can communicate with each other.
Generally speaking, when creating an NLB cluster, use multicast with a single network adapter and unicast with two adapters. With two adapters in unicast mode, because the hosts cannot communicate with each other over the cluster adapter, a second adapter is set up for internal communication, i.e. the heartbeat in the cluster configuration. Microsoft officially recommends considering unicast mode first and moving away from it only if it cannot meet the requirements; to contain switch flooding, the recommended approach is to use a dedicated VLAN.
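
Where a router will not map the cluster's unicast virtual IP to the multicast MAC on its own, a static ARP entry can be configured on the router. A sketch in Cisco IOS syntax, for illustration only, using a hypothetical virtual IP of 192.168.1.100 and the matching 03-BF multicast MAC (neither value comes from this article):

arp 192.168.1.100 03bf.c0a8.0164 arpa

The MAC is simply 03-BF followed by the hexadecimal form of the virtual IP (192.168.1.100 = C0.A8.01.64), matching the "03-BF" addressing described above.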

http://hi.baidu.com/hneli/blog/item/656725d3e5471433970a16bd.html

VMware ESXi and ESX Info Center

It Is Time to Upgrade to vSphere 5 and Migrate from ESX to ESXi!

VMware vSphere 5.0 has finally arrived and includes several new unique features – such as Storage DRS and Autodeploy – that deliver unprecedented value to VMware customers. Unlike prior versions, vSphere 5 supports only the ESXi hypervisor architecture, the only thin purpose-built hypervisor that does not depend on a general purpose operating system. In order to benefit from the unique capabilities and features of vSphere 5 and ESXi 5, VMware recommends that customers:

Learn About ESXi — VMware’s Most Advanced Hypervisor Architecture

Like its predecessor ESX, ESXi is a "bare-metal" hypervisor, meaning it installs directly on top of the physical server and partitions it into multiple virtual machines that can run simultaneously, sharing the physical resources of the underlying server. VMware introduced ESXi in 2007 to deliver industry-leading performance and scalability while setting a new bar for reliability, security and hypervisor management efficiency.

Deploy ESXi — VMware’s Most Advanced Hypervisor Architecture

So how is ESXi different from ESX? While both architectures use the same kernel to deliver virtualization capabilities, the ESX architecture also contains a Linux operating system (OS), called "Service Console," that is used to perform local management tasks such as executing scripts or installing third party agents. The Service Console has been removed from ESXi, drastically reducing the hypervisor code-base footprint (less than 150MB vs. ESX’s 2GB) and completing the ongoing trend of migrating management functionality from the local command line interface to remote management tools.

ESXi and ESX Architectures Compared

VMware ESX Architecture. In the original ESX architecture, the virtualization kernel (referred to as the vmkernel) is augmented with a management partition known as the console operating system (also known as COS or service console). The primary purpose of the Console OS is to provide a management interface into the host. Various VMware management agents are deployed in the Console OS, along with other infrastructure service agents (e.g. name service, time service, logging, etc). In this architecture, many customers deploy other agents from 3rd parties to provide particular functionality, such as hardware monitoring and system management. Furthermore, individual admin users log into the Console OS to run configuration and diagnostic commands and scripts.

VMware ESXi Architecture. In the ESXi architecture, the Console OS has been removed and all of the VMware agents run directly on the vmkernel. Infrastructure services are provided natively through modules included with the vmkernel. Other authorized 3rd party modules, such as hardware drivers and hardware monitoring components, can run in the vmkernel as well. Only modules that have been digitally signed by VMware are allowed on the system, creating a tightly locked-down architecture. Preventing arbitrary code from running on the ESXi host greatly improves the security of the system.

Benefits of VMware ESXi Hypervisor Architecture

The hypervisor architecture of VMware vSphere plays a critical role in the management of the virtual infrastructure. The introduction of the bare-metal ESX architecture in 2001 significantly enhanced performance and reliability, which in turn allowed customers to extend the benefits of virtualization to their mission-critical applications. Once again, the introduction of the ESXi architecture represents a similar leap forward in reliability and virtualization management. Less than 5% of the size of ESX, VMware ESXi runs independently of an operating system and improves hypervisor management in the areas of security, deployment and configuration, and ongoing administration.

Improve Reliability and Security. The older architecture of VMware ESX relies on a Linux-based console operating system (OS) for serviceability and agent-based partner integration. In the new, operating-system-independent ESXi architecture, the approximately 2 GB console OS has been removed and the necessary management functionality has been implemented directly in the core kernel. Eliminating the console OS drastically reduces the codebase size of ESXi to approximately 100 MB, improving security and reliability by removing the security vulnerabilities associated with a general purpose operating system.

Streamline Deployment and Configuration. ESXi has far fewer configuration items than ESX, greatly simplifying deployment and configuration and making it easier to maintain consistency.

Reduce Management Overhead. The API-based partner integration model of ESXi eliminates the need to install and manage third party management agents. You can automate routine tasks by leveraging remote command line scripting environments such as vCLI or PowerCLI.

Simplify Hypervisor Patching and Updating. Due to its smaller size and fewer components, ESXi requires far fewer patches than ESX, shortening service windows and reducing security vulnerabilities. Over its lifetime, ESXi 3.5 required approximately 10 times fewer patches than ESX 3.5.

Reference Link: http://www.vmware.com/products/vsphere/esxi-and-esx/overview.html