Category: Others

Another PowerEdge Firmware Update Nightmare: A Second Disappointment

By admin, December 23, 2010 1:11 pm

Dell has a new built-in environment called System Services (USC, the Unified Server Configurator) that lets you update firmware and deploy different operating systems through a simple GUI. I like it a lot for OS deployment, as it has indeed saved me a lot of time hunting down drivers, but I recently found that one specific USC module, the Platform (Firmware) Update feature, isn't very stable and can cause various system problems.

The following is what I’ve encountered:

I pressed F10 during boot to enter USC, selected Platform Update to start the firmware update process, configured an IP address for USC, and then connected to ftp.us.dell.com to check for available firmware updates. After about 5 minutes, as usual, USC found a list of outdated firmware (about 15 items) and asked whether I would like to update them. Great! I clicked Apply, and all I got on screen was "Please Wait". I waited, and waited, and waited. After more than 2 hours I definitely sensed something was wrong, as my last update through USC (BIOS and iDRAC only) took about 15 minutes.

I immediately called Dell ProSupport about this issue. They suggested I keep waiting and assured me everything should be fine: even if something went wrong, I could still use Firmware Rollback under USC, and if a particular firmware download did not complete, that update would simply not be applied.

OK, so I went to sleep. Twelve hours later the server was still showing that annoying "Please Wait". Note that there is NO WAY to know the status of the update: it does not tell you which firmware module it has finished downloading, or which component it has finished upgrading.

I had no choice but to reboot the server. Luckily it came back up fine (the OS is Windows Server 2008 R2), and I checked the firmware information in OpenManage: NOTHING HAD BEEN UPDATED, and worse, some of the sensors in OpenManage (temperature, voltage, fan, etc.) were gone. I knew right away that the iDRAC must have been damaged during the USC firmware update, so I connected to the R610's iDRAC and indeed found all of the sensors showing reading errors. Huh?

I then thought updating the iDRAC firmware might help, so I tried to update iDRAC6 to the latest 1.5.4 using the standard Windows update package. It immediately failed with "This update package is not compatible with your system configuration". Thank you very much!

After searching Google and finding no solution, it occurred to me to download the raw iDRAC6 firmware image and use iDRAC's own update feature (Remote Access > Update), and that worked! Yeah! I rebooted the server, checked OpenManage again, and thank god, everything was back to normal this time!

In the end, I simply went back to the traditional way of updating PowerEdge firmware, the way I have done it for the past 10+ years: download each firmware package individually and update my PowerEdge R610's BIOS, network, RAID, etc. one by one.

So my conclusions are:

1. The Platform Update utility under USC is NOT A MATURE PRODUCT!

2. The USC FTP download probably hung on a network issue, but USC never alerts you; it just shows "Please Wait", which really DOESN'T HELP AT ALL!

3. You NEED TO UPDATE the iDRAC first (possibly using the method above) before using USC, as USC relies on the iDRAC to carry out the rest of the firmware upgrade process. I remember that one time USC warned me it would upgrade the iDRAC before anything else, but it did not warn me this time.

Dell, please fix this bug in USC Platform Update ASAP, or at least show us a progress bar or some useful detail!

If you want to read about my first firmware update disappointment, please refer to "Do not update firmware/BIOS from within the ESX console".

Update:

Just received a tip from Virtualization Buster describing the proper way to update firmware from a USB drive via the F10 Unified Server Configurator (USC):

Updating Dell R-Series aka 11-th Generation Servers via USB and Repository Manager

http://www.virtualizationbuster.com/?p=1301

Finally, since there is no way to attach a USB drive via iDRAC6, if you need to update drivers through USC you can use Dell Repository Manager to export the drivers to an ISO. (Tip from Dell ProSupport)

Veeam B&R 5: Block Size Optimization and Reversed Incremental or Synthetic Full

By admin, December 20, 2010 12:31 pm

I saw these two pieces of useful information on Veeam's forum today, mostly contributed by Tom:

Block Size Optimization

In the job properties, on the "Backup Destination" step, if you hit "Advanced" and then select the "Storage" tab, with v5 you can now choose to optimize for Local disk, LAN target, or WAN target. What this really does is set the block size of the VBK. Previous versions of Veeam always used 1MB blocks, which is now the equivalent of the "Local disk" option. LAN target uses 512KB blocks, and WAN target uses 256KB blocks. The smaller block sizes typically mean that incrementals are smaller, so less data is transferred over the LAN or WAN, at the cost of some CPU. Because we push our backups across sites, we always use the WAN target setting.
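
To make the trade-off concrete, here is a minimal Python sketch (my own illustration, not Veeam code; the write pattern is made up) of how block size affects the amount of data an incremental has to ship, since any block containing at least one changed byte must be transferred in full:

    # Rough illustration (not Veeam code): an incremental ships every block
    # that contains at least one changed byte, so smaller blocks ship less data.
    import random

    def incremental_transfer(changed_regions, block_size):
        """changed_regions: list of (offset, length) tuples in bytes."""
        touched = set()
        for offset, length in changed_regions:
            first = offset // block_size
            last = (offset + length - 1) // block_size
            touched.update(range(first, last + 1))
        return len(touched) * block_size

    # Hypothetical workload: 2,000 scattered 4KB writes on a 100GB virtual disk.
    random.seed(0)
    changes = [(random.randrange(0, 100 * 2**30, 4096), 4096) for _ in range(2000)]

    for label, size in [("Local disk (1MB)", 2**20),
                        ("LAN target (512KB)", 512 * 2**10),
                        ("WAN target (256KB)", 256 * 2**10)]:
        print(f"{label}: ~{incremental_transfer(changes, size) / 2**20:.0f} MB to transfer")

With small scattered changes, halving the block size roughly halves the incremental, which is exactly why the WAN target setting helps when pushing backups across sites.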


Reversed Incremental or Synthetic Full

As a general rule, reverse incrementals are going to use the least amount of space on disk but the most on tape, while the reverse is true of forward incrementals: they will use more disk space but significantly less tape space. The only exception might be if you have a fairly short retention period.

Assume you have a 100GB full backup with 10GB of changes per day; here's how the space breaks down with 4 weeks of retention:

Reverse Incremental:
Disk — 100GB Full + 280GB (10GB/day * 28 days) of reverse incrementals = 380GB
Tape — 100GB VBK copied to tape every day * 28 days = 2.8TB

Forward Incremental w/Synthetic Full:
Disk — 400GB (one 100GB full per week * 4 weeks) + 240GB (10GB/day incrementals * 24 days) = 640GB
Tape — Same as disk, since you simply copy the full or incremental to tape every day = 640GB

So, the Forward Incremental/Synthetic option in this scenario uses ~70% more disk space, but less than 25% of the tape space. If you plan to keep only a short period of disk retention and use tape for long-term storage, then forward incrementals will save space, but that's about the only scenario where they will. For the best space savings with on-disk retention, reverse incrementals are the way to go, but at the cost of a large amount of tape space.
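
Here is the same arithmetic as a small Python sketch (my own illustration, using the example numbers above), in case you want to plug in your own full size, daily change rate, and retention:

    # Estimate disk and tape usage for the two retention schemes described above.
    # Assumes one backup per day, a copy to tape every day, and the example's
    # simplification that every incremental equals the daily change rate.

    def reverse_incremental(full_gb, change_gb_per_day, retention_days):
        disk = full_gb + change_gb_per_day * retention_days
        tape = full_gb * retention_days        # the full VBK goes to tape every day
        return disk, tape

    def forward_incremental_synthetic(full_gb, change_gb_per_day, retention_days,
                                      fulls_per_week=1):
        fulls = retention_days // 7 * fulls_per_week
        incrementals = retention_days - fulls  # days that only produce an incremental
        disk = full_gb * fulls + change_gb_per_day * incrementals
        tape = disk                            # whatever lands on disk is copied to tape
        return disk, tape

    print(reverse_incremental(100, 10, 28))             # (380, 2800) -> 380GB disk, 2.8TB tape
    print(forward_incremental_synthetic(100, 10, 28))   # (640, 640)  -> 640GB disk and tape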


Personally I prefer reversed incremental, because my main strategy is to get a complete working backup to tape every night, so that in an emergency I can restore my files from tape without having to merge increments.

YY (Yonex)'s ISO Metric Really Lives Up to Its Reputation

By admin, December 14, 2010 9:29 pm

Thanks to Uncle Six and Will for the great hitting session with me today. My serve, usually my weakest shot, actually worked today and won me quite a few points. The takeaway is that you absolutely cannot get lazy: from the toss, to bending the knees, to the weight transfer, to the strike, you cannot slack on any of it. Also, last time a senior clubmate reminded me to toss the ball a little more to the left to create wider angles and more spin, and on Sunday Wolf demonstrated how to use the forearm to wrap around the ball for even more spin and forward drive. All of this added to the stability and power of today's serves.

But my forehand groundstroke, usually my most reliable shot, kept breaking down once the actual matches started. During rallies the baseline strokes were still perfectly fine, and it wasn't about rhythm or ball speed, so why? I did everything from the takeback to the follow-through, yet the shots had no power and kept finding the net. I still can't figure it out; perhaps I simply wasn't moving fast enough. As they say, tennis is played with your feet.

Finally I tried Will's new YY EZONE racquet. YY's ISO Metric really lives up to its reputation: just like the other YY racquets I have tried, it generates a lot of spin and is easy to control, especially at the net and on approach shots, and the feel is excellent!

Next time I get the chance I must ask the clubmates who use YY for more of their opinions on it. I am starting to like YY more and more, because I have always wanted to find a replacement for the POG. (The POG is already a grandpa of nearly 25 years, but I just love its feel too much.)



Let the Bullets Fly (讓子彈飛)

By admin, December 10, 2010 10:59 pm

This is very likely one of the most anticipated Chinese-language films of the year; the lead cast alone is star-studded!

Jiang Wen, Chow Yun-fat, and Ge You are all heavyweight actors. Ge You especially: who would have thought that in this year's New Year season he alone would star in three blockbusters!


Regaining Confidence in My Serve

By admin, November 16, 2010 10:16 pm

Today's exciting doubles matches ended on a happy note. I was surprised to find my serve working so effortlessly; most likely my toss was finally correct, so both the power and the spin of my serves changed fundamentally. Most important was the mental adjustment: I completely let go of the baggage and served as boldly as I could, and the result was like a different person compared to yesterday. Add more aggressive volleying at the net, and even I was satisfied with my performance; the other three players on court can vouch for me.

It Feels Like Another Lifetime: Back from Hell!

By admin, November 14, 2010 11:19 pm

Three months have flown by before I knew it, and I have finally survived these three extremely tough months.

Starting in August the company had to carry out a major system upgrade on a tight schedule, so during this period even sleep was a luxury. I worked roughly 13-15 hours almost every day; I certainly wasn't even making the $28 minimum hourly wage. (Friends interested in IT/network/server topics can read the details on my blog.)

Sometimes I would run into tennis friends on the street and could only give a wry smile. It couldn't be helped: making a living comes first, and tennis had to take second place.

Now I am back from Hell! Time to start a new tennis season all over again!

I sincerely hope to enjoy Happy Tennis with everyone once again!

Singles or doubles, it doesn't matter; the most important thing is to have fun! Don't worry, I haven't regressed too much: last week I played my first proper match in three months and it felt great.

Besides, these three months gave me an extremely thorough mental "stress test", so I should now be able to stay even calmer under pressure on court.

Life always has a few big ups and downs. As the government publicity slogan says, "there are always more solutions than difficulties." Face things calmly, put in the effort, and most of them can be overcome; and even if you fall short, there is no need for regret.

This time I also learned to let go of some long-held burdens, and it turns out you gain even more afterwards.

Just like in a tennis match, sometimes you need to learn when to let go of a shot, or even a game, and prepare for the next one that may eventually lead to victory in the match.

As the saying goes, "only by being placed on the field of death can one come out alive." I now fully understand this; of course, how hard the process was (especially mentally) is something friends who have been through similar difficulties will understand.

Inception

By admin, November 6, 2010 3:22 pm

Watching him go from the green Jack in Titanic 13 years ago, to the hot-tempered Billy in The Departed, to two recent back-to-back films in a similar vein, Teddy in Shutter Island and now Cobb in Inception, Leonardo DiCaprio has truly grown more and more compelling. In barely a decade his acting has reached complete mastery; it is rare indeed for Hollywood to have talent like this.

As for the film itself, it can be called the most thought-provoking movie since The Matrix, and it may well become the classic reference for this kind of subject matter in the future.


Veeam Backup Space Required: Formula and Calculation

By admin, October 31, 2010 12:03 pm

I found this really useful information on Veeam's forum (contributed by Anton), which is a great place to learn about the product; their staff are all very caring and helpful.

The formulas we use for disk space estimation are the following:

Backup size = C * (F*Data + R*D*Data)
Replica size = Data + C*R*D*Data

Data = sum of processed VMs size (actually used, not provisioned)

C = average compression/dedupe ratio (depends on too many factors, compression and dedupe can be very high, but we use 50% – worst case)

F = number of full backups in retention policy (1, unless periodic fulls are enabled)

R = number of rollbacks according to retention policy (14 by default)

D = average amount of VM disk changes between cycles, in percent (we use 10% right now, but will change it to 5% in v5 based on feedback… reportedly for most VMs it is just 1-2%, but active Exchange and SQL can be up to 10-20% due to transaction log activity, so 5% seems to be a good average)
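
For convenience, here is a small Python sketch of the formulas above (my own, using the quoted default values; the 2TB figure is just a made-up example):

    # Estimate Veeam backup/replica disk space from the formulas above.
    # Defaults: C = 50% compression/dedupe, F = 1 full, R = 14 rollbacks, D = 10% change.

    def backup_size(data_gb, c=0.5, f=1, r=14, d=0.10):
        return c * (f * data_gb + r * d * data_gb)      # C * (F*Data + R*D*Data)

    def replica_size(data_gb, c=0.5, r=14, d=0.10):
        return data_gb + c * r * d * data_gb            # Data + C*R*D*Data

    used_gb = 2000  # hypothetical: 2TB of actually used (not provisioned) VM data
    print(f"Backup repository needed: ~{backup_size(used_gb):.0f} GB")
    print(f"Replica datastore needed: ~{replica_size(used_gb):.0f} GB")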

Some interesting findings about Veeam Backup v5

By admin, October 29, 2010 11:56 am

The following are my own findings about Veeam Backup v5:

  • If you have more than 30-50 VMs (averaging 20-30GB each) to back up, it's better to set up a second Veeam Backup server to load-balance the jobs. Eventually you may end up with a number of load-balanced Veeam Backup servers that spread the load evenly, which greatly reduces queue time and shortens the total backup window. Don't worry: all Veeam Backup servers are managed centrally by Veeam Enterprise Manager (and the best part is that additional Veeam Backup servers DO NOT count towards your Veeam licenses; how thoughtful and caring of Veeam, thank you!). Veeam recommends creating 3-4 jobs per Veeam Backup server because of the high CPU usage for de-duplication and compression during the backup window; this does not apply to replication, which uses neither de-duplication nor compression. Note: there is another way to fully utilize your one and only Veeam Backup server by running more than 4 concurrent jobs. (See the quote below.)
  • The "Estimated required space" figure in the job description is incorrect; it's a known bug that the VMware API (or Veeam) does not yet know how to interpret thin-provisioned volumes, so make your own estimate. Most of the time it over-estimates the total required backup space by 3-10x, so don't worry!
  • After each job completes, you will see a processing rate (for example 251 MB/s), which is the total size of the VMs divided by the total time it took to process them. This includes the time it takes to talk to vCenter, freeze the guest OS, create and remove the snapshot, back up the small VM files (configuration), and back up the actual disks.

I am still not sure whether the target storage requires a very fast, high-IOPS device, such as a 4-12 disk RAID10/RAID50 array of 10K/15K SAS drives. Some say the backup process with de-duplication and compression produces random I/O, others say it's sequential. If it's random, we need fast spindles and lots of disks; if it's sequential, cheap 7200RPM SATA disks will do, and a 4-disk RAID 5 of 2TB drives would be the most cost-effective solution for storing the VBK/VIB/VRB images.

Some interesting comments quoted from Veeam's forum relating to my findings:

That's why we run multiple jobs. Not only that, but when doing incremental backups, a high percentage of the time is spent simply prepping the guest OS, taking the snapshot, removing the snapshot, etc. With Veeam v5 you get some more "job overhead" if you use the indexing feature, since the system has to build the index file (which can take quite some time on systems with large numbers of files) and then back up the zipped index via the VM tools interface. This time is all calculated into the final "MB/sec" for the job. That means that if you only have a single job running there will be lots of "down time" where no transfer is really occurring, especially with incremental backups, because there's relatively little data transferred for most VMs compared to the amount of time spent taking and removing the snapshot. Multiple jobs help with this because, while one job may be "between VMs" handling its housekeeping, the other job is likely to be transferring data.

There are other things to consider as well. If you're a 24×7 operation, you might not really want to saturate your production storage just to get backups done. This is admittedly less of an issue with CBT-based incrementals, but it used to be a big deal with ESX 3.5 and earlier, and full backups can still impact your production storage. If I'm pushing 160MB/sec from one of my older SATA EqualLogic arrays, its I/O latency will shoot to 15-20ms or more, which severely impacts server performance on that system. It might not be an issue if you're not a 24×7 shop and you have a backup window where you can hammer your storage as much as you want, but it is certainly an issue for us. Obviously we have times that are quieter than others, and our backup windows coincide with our "quiet" time, but we're a global manufacturer, so systems have to keep running and performance is important even during backups.


Finally, one thing often overlooked is the backup target. If you're pulling data at 60MB/sec, can you write the data that fast? Since Veeam is compressing and deduping on the fly, it can have a somewhat random write pattern even when it's running fulls, but reverse incrementals are especially hard on the target storage, since they require a random read, a random write, and a sequential write for every block that's backed up during an incremental. I see a lot of issues with people attempting to write to older NAS devices or 3-4 drive RAID arrays, which might have decent throughput but poor random access. This is not as much of an issue with fulls and the new forward incrementals in Veeam 5, but it still has some impact.


No doubt Veeam creates files a lot differently than most vendors. Veeam does not just create a sequential, compressed dump of the VMDK files.

Veeam's file format is effectively a custom database designed to store compressed blocks and their hashes for reasonably quick access. The hashes allow for dedupe (blocks with matching hashes are the same), and there's some added overhead to provide additional transactional safety so that your VBK file is generally recoverable after a crash. That means Veeam files have a storage I/O pattern more like a busy database than a traditional backup file dump.

If Drug Addiction Ruins One Lifetime, Photography Bankrupts Three Generations (reposted)

By admin, October 28, 2010 10:00 am

14 days ago: Tomorrow is the birthday of my girlfriend, who is still studying in another city, so I bought a Canon 50D kit [7,800 RMB] to take some photos to mark the occasion. With the new camera in hand I went straight to the park to try it out. An old man there shooting lotus flowers said Canon images are too soft and Nikon is sharper, and he proved it on the spot, so I started to regret my purchase.

13 days ago: I simply gave the Canon to my girlfriend as her present, and on the way home bought myself a Nikon D300s kit with the 16-85mm [14,800 RMB], plus a laptop [5,400 RMB], because otherwise there would be nowhere to view the photos.

12 days ago: I found the focal length wasn't long enough, then discovered the Nikon AF-S DX VR 18-200mm F3.5-5.6G IF-ED [5,600 RMB] could cover everything with one lens. Perfect, so I bought it and went bird shooting the same day. I happened to run into some photographers shooting white cranes by the river. Heavens, they were using the Nikon AF-S 300mm F4D IF-ED; I asked the price [8,000 RMB], looked at their photos, and figured I could never take shots that nice without one, so I bled money again. There was also a Nikon AF-S 300mm F2.8 IF-ED II, but at over 30,000 I couldn't afford it.

11 days ago: A couple of days ago I met some photography friends who said they were shooting model portraits today, and that the 85/1.4D [7,000 RMB] was excellent and bringing one along would pay off nicely, so I bought it. At the shoot I discovered it was rather hard to get even a couple of full-body shots of the models. Very frustrating.

10 days ago: My younger sister asked me to take some photos for her album. I thought how nice a 50mm/1.4 [4,000 RMB] would be, so I bought one without pausing. Sure enough, this time I could fit more in the frame than yesterday, but the images always felt a bit soft.

9 days ago: Work asked me to help photograph the whole production floor. I pulled out all my lenses and didn't have a single ultra-wide. To show the bosses I really knew my photography, I immediately bought a 12-24mm/F4G IF-ED [8,200 RMB].

8 days ago: Today I joined a photography association and some photography chat groups, and browsed photography websites (攝影無忌, 太平洋攝影, 大眾攝影, 車壇影協, 新攝影, 蜂鳥网, 中國攝影家, 路客驢舍, 橡樹攝影, and so on). I discovered one thing: a full-frame camera has a real advantage in image quality, with yesterday's factory-floor shoot as the prime example. A Nikon D700 [13,800 RMB] paired with the 12-24mm/F4G IF-ED would drop the 1.5x crop factor, and with that super-high ISO everything would be solved. After all this online study, I decided adding a full-frame D700 could not be wrong, so I gave the D300s kit to my sister.

7 days ago: Mixing with people from the photographers' association, I heard that the D700 with the 12-24, 24-70 and 70-200 is the most perfect combination in the world. Now that I'd gone full-frame, how could I not own the pro glass? They were too expensive, so I settled for grey-market copies of the 24-70/2.8 [12,000 RMB] and 70-200/2.8 [14,500 RMB].

6 days ago: Bought quite a few photography books and subscribed to newspapers and magazines [1,200 RMB], reading them on and off work, studying photo-editing software, and getting to know some photography enthusiasts.

5 days ago: The enthusiasts' photos were almost all shot with primes ("zoom with your feet"), and the image quality was first-rate. I had indeed spent a lot of money these past days and was short on cash, and supposedly primes don't get used that much anyway, so I bought two third-party lenses: a Sigma 30mm F1.4 EX DC HSM [3,300 RMB] and a Tamron SP AF 180mm F3.5 Di LD-IF [8,300 RMB]. From the Tokina line-up I really couldn't pick out a prime.

4 days ago: I found the Sigma's colors leaned green, with annoying sharpness and saturation, while the Tamron produced flat, washed-out grey images. Only then did I understand why the enthusiasts all use original-brand primes: worth every penny. So what now? Nothing for it but to go back to first-party glass. I splurged again on the Nikon AF DX Fisheye 10.5mm F2.8G ED [5,500 RMB], the Nikon Ai AF 18mm F2.8 [8,100 RMB], and the Nikon AF-S Micro NIKKOR 60mm F2.8G ED [5,500 RMB]. There was also the Nikon PC-E NIKKOR 24mm f/3.5D ED that I truly dared not buy: a tilt-shift lens, wonderful for architecture, but over 20,000.

3 days ago: Still missing a medium-telephoto prime, I picked up the Nikon Ai AF DC 135mm F2D [7,200 RMB]; together with the 300/F4 from a few days ago that's enough. Something like the Nikon AF-S 600mm F4G ED VR I truly dared not buy, at over 90,000 on the market.

2 days ago: I heard that poorly stored lenses grow mould easily and need to be kept dry, so I bought an electronic dry cabinet, which cost me a month's salary [4,000 RMB]. At noon I heard 路客驢舍 was organizing an outdoor hiking, camping and photography trip, so I packed up the camera gear and camping equipment and headed out.

Yesterday: Because of the previous day's outing, the camera gear was so heavy that I strained my neck and fell ill. The hospital visit and therapeutic massage cost [250 RMB]. On the way home I stopped by the camera equipment mall and told the vendor about my condition, and he talked me into buying a Canon G11 compact [4,350 RMB]. There were also the Leica M8 at over 20,000 and the Leica M9 at over 60,000, but there was no way my wallet held that much.

Today: My girlfriend came to my place for the holidays. Opening her backpack, I saw that besides the Canon kit I gave her, it now also held a 17-40/4.0 [4,800 RMB], a 50/1.8 [700 RMB], and a 70-200/2.8L IS USM [13,000 RMB], plus accessories and bags [2,800 RMB]. I was dumbfounded on the spot: she had spent all her scholarship money from the past few years. I phoned my sister to come home and keep her future sister-in-law company; she said she was at the equipment mall looking at full-frame cameras. My ears rang and my head spun right there.


PS: Hard work pays off. In the afternoon an editor friend from a publishing house, whom I hadn't heard from in over a year, said he would come by to copy some of my photos to print in a book, which genuinely cheered me up. As he was leaving I asked what book it was; he said a textbook of worked examples titled "Incorrect Exposure and Composition". I wanted to run him over with a tripod right then.

Tomorrow: At my current repayment capacity of 2,000 RMB a month, it will take me N years to pay off the loans from friends and the bank. Of course I hope for a raise soon; I still haven't bought the Nikon D3X [45,000 RMB], or the Mamiya DM28 medium-format digital camera someone just told me about [around 110,000].

The future: I too will go around telling everyone how great it is to shoot with a DSLR.


PS: A year later, I married my girlfriend and moved away for work. The family house was demolished for redevelopment, and my 70-year-old grandmother sold all the cameras and lenses to the scrap-metal collector at 0.5 RMB per jin. The only thing left was the Canon 50/1.8: the scrap dealer said it was all plastic and wouldn't take it!
