nbcertcmd -createtoken fails with EXIT STATUS 8000: User does not have permission(s) to perform the requested operation

I wanted to deploy a NetBackup 8.1 client the other day, and one of the steps involved entering a token from the master server. My master server was a NetBackup Appliance running software version 3.1.

As per the procedure, I went to the master server and logged in to the NetBackup Web Management Console first:

bpnbat -login -logintype WEB

Afterwards, I ran the following command to generate the token:

nbcertcmd -createToken -name token_name

But it failed with “EXIT STATUS 8000: User does not have permission(s) to perform the requested operation”, despite my using the “admin” account, which had root privileges.

My colleague Hoai (who is a brilliant troubleshooter, by the way) suggested creating a NetBackupCLI account:

  • Go back to CLISH and navigate to Main_Menu > Manage > NetBackupCLI.
  • Run: Create myUser (replace myUser with your preferred user name).
  • Enter the password for your user twice.
  • Once created, log out and log back in to the Appliance using the new account.
  • Re-run the bpnbat and nbcertcmd commands; they should now work.
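For repeat deployments, the login-then-token sequence is easy to script. Below is a minimal sketch assuming the default UNIX/Linux install paths; generate_token and dry_run are my own names, not NetBackup's.

```python
import subprocess

NB_BIN = "/usr/openv/netbackup/bin"  # default install path; adjust as needed

def generate_token(token_name, dry_run=True):
    """Run bpnbat -login, then nbcertcmd -createToken, in order.

    With dry_run=True the commands are only printed for review."""
    commands = [
        [f"{NB_BIN}/bpnbat", "-login", "-logintype", "WEB"],
        [f"{NB_BIN}/nbcertcmd", "-createToken", "-name", token_name],
    ]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # stop on the first failing command
    return commands
```

Run it with dry_run=False as the NetBackupCLI user created above; bpnbat will still prompt for credentials interactively.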

The vnetd proxy encountered an error

You may encounter this error after adding a new NetBackup 8.1 client to a policy and trying to access it via Host Properties > Clients.


One common reason for this error is that the new client does not yet have a host ID-based certificate. Why is this so? Well, it could be an administrator’s oversight during the install process, especially if the install was automated with a script, such as /usr/openv/netbackup/bin/install_client_files.

What you need to do is simple: deploy the client’s host ID-based certificate manually. The steps:

Go to the master server and run:

/usr/openv/netbackup/bin/bpnbat -login -logintype WEB
/usr/openv/netbackup/bin/nbcertcmd -createToken -name token_name

NOTE: You can change the token_name. Once generated, copy the token string.

Now go to the client and run:

/usr/openv/netbackup/bin/nbcertcmd -getCertificate -host client_name -server master_server -token

Replace client_name with the client’s hostname and master_server with your master server’s name, then paste the token string when prompted.
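If you automate client installs, the token prompt can also be answered programmatically. A hedged sketch (deploy_certificate is a hypothetical helper name; the nbcertcmd path is the default client install location):

```python
import subprocess

NBCERTCMD = "/usr/openv/netbackup/bin/nbcertcmd"  # default client path

def deploy_certificate(client, master, token, dry_run=True):
    """Fetch the host ID-based certificate, feeding the token via stdin.

    With dry_run=True the command is only printed for review."""
    cmd = [NBCERTCMD, "-getCertificate", "-host", client,
           "-server", master, "-token"]
    if dry_run:
        print(" ".join(cmd))
        return cmd
    # answer the interactive token prompt from stdin
    subprocess.run(cmd, input=token + "\n", text=True, check=True)
    return cmd
```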

Tips for analyzing large tcpdump output file

It is quite common for me to get a large* tcpdump output file to analyze.

Wireshark has been my default tcpdump output file parser for a while, and I have absolutely no complaints when working with small files. When the size exceeds 1 GB, though, I find the waiting time unbearable – for me, at least.

My day-to-day workhorse is a Lenovo W530 (i7, 32 GB RAM). Even with this spec, it takes Wireshark at least 5 minutes to load a couple of gigabytes’ worth of tcpdump output, and I can expect a similar wait each time I apply a filter.

Fortunately SplitCap came to the rescue!

SplitCap takes a fraction of the time Wireshark needs to filter information. The only problem is that its filter definitions are not as powerful as Wireshark’s. “So why not combine the two?” I thought.

When I get a large tcpdump file these days, I parse it with SplitCap first, filtering only the IP addresses and ports I want. The resulting output is much smaller, and I can load it quickly in Wireshark for more in-depth analysis.

* Large = a couple of Gigabytes.

Is your SQL Intelligent Policy backup being skipped randomly?

SQL Intelligent Policy was introduced in NetBackup version 7.7. Thanks to automatic registration of SQL Servers and their instances, it greatly reduces the time it takes to configure SQL backup. You don’t need to create backup scripts manually, either.

I have supported this feature for years now and I think it works really well, except for one minor annoyance: transaction log backups may be skipped seemingly at random under a specific condition.

That is, if you combine full, differential, and transaction log schedules into one policy, you are asking for trouble: when a full or differential backup runs, the transaction log backup will not run. But wait, doesn’t Microsoft allow concurrent full/differential and transaction log backups? Indeed, it does.

To take advantage of backup concurrency, what you need to do is simple: create two separate policies, one for full and differential backups, the other for transaction log backups. Don’t worry about the transaction log backups losing their link to the full backup; NetBackup automatically links them together.

In fact, the NetBackup for SQL Administrator’s Guide recommends separating the policies if you have high-frequency SQL backups.

Backup jobs have completed but remain active indefinitely

When a backup job has completed, the job’s state should change to “done”. The NetBackup process responsible for updating job details is bpjobd. Besides interacting with the Job Manager (nbjm), this process also interacts with the NetBackup EMM database (NBDB). While uncommon, there can be occasions where the aforementioned database is fragmented or has a corrupt index.

If you notice random backup jobs completing but remaining active indefinitely, it can be an indication of a problem with the NBDB. In this scenario, ideally you would do some housekeeping (procedure below) before calling Technical Support.

NBDB Housekeeping:

1. Confirm you have a good, recent full catalog backup.

2. Allocate a maintenance window, because you need to stop NetBackup processes.

3. Shutdown NetBackup:

/usr/openv/netbackup/bin/goodies/netbackup stop

If there are any stubborn processes, try running this to terminate them:

/usr/openv/netbackup/bin/bp.kill_all

4. Copy the content of /usr/openv/db/data/ to another location for a secondary backup.

5. Then start only the Sybase database server:

/usr/openv/db/bin/nbdbms_start_server

6. Run a database rebuild:

/usr/openv/db/bin/nbdb_unload -rebuild -verbose
  • If there is no error, go to step 7.
  • If you do see an error, copy and save the error message. Stop NetBackup services (netbackup stop), then move out the bad database set and put back the old set (the one you copied earlier).

7. Once step 6 is completed, validate the NBDB content by running:

/usr/openv/db/bin/nbdb_admin -validate -full -verbose
  • Again, if there is no error, go to step 8.
  • If you do see an error, copy and save the error message. Stop NetBackup services (netbackup stop), then move out the bad database set and put back the old set (the one you copied earlier).

8. Compare the size of /usr/openv/db/data after the rebuild with your original copy. Is it smaller than the original?

9. Stop NetBackup services again:

/usr/openv/netbackup/bin/goodies/netbackup stop

10. Then start all NetBackup services:

/usr/openv/netbackup/bin/goodies/netbackup start

11. Monitor the backups.

Clean way to reconfigure the Fibre Transport Media Server (FTMS) on NetBackup Appliance 52xx and 53xx

NOTE: Please refer to my other post for FTMS on a BYO NetBackup Media Server.

While you can theoretically run the same FTMS commands on a NetBackup Appliance as on its BYO siblings, you need to be careful with the former because it contains additional monitoring databases, scripts, and reports.

The rule of thumb: if you can use CLISH to perform a task, use it instead of doing the task manually at the command line. This keeps the databases, scripts, and reports in sync, and it is especially true for FTMS configuration.

If you accidentally ran commands on your NetBackup Appliance that broke its FTMS configuration, you can follow the steps below to reset it.

NOTE: These steps will require reboots, so please allocate a maintenance window first.

1. Unplug all Fibre Channel cables from your Appliance. Confirm beforehand that these connections are used only for FTMS and not for any other purpose, such as a disk array. If you have a disk array connected, arrange with your SAN admin to gracefully disconnect/unmount it. If you have tape libraries attached, they are not in use as long as no backups are running.

Don’t unplug the SAS cables that connect to the disk shelf if you have one.

2. Go to the elevated prompt and run the following script to reset the FTMS settings (don’t forget the number 4 at the end):
sh /opt/NBUAppliance/scripts/fcr/clear_san.sh 4

** The appliance will reboot automatically once this step is complete **

3. Configure the SAN client again.
Log in to Appliance CLISH and go to Settings.

Run: FibreTransport SANClient Enable 4

** Once the wizard is completed, you will be prompted to reboot again **

4. After the appliance is back, verify the FTMS setting. Log back in to the CLISH and go to Settings.

Run: FibreTransport SANClient Show

Check the status. It should show: [Info] Fibre Transport Server enabled.

Then go to Manage > FibreChannel

Run: Show

Check for any errors. If all looks good, you have your FTMS back. If not, it is best to contact NetBackup Technical Support.

How to check the command and switches being run by an active Windows process

Option 1:

  • Open Task Manager (simply right click an empty space on your task bar and select Task Manager)
  • Click Details tab
  • On the column header section, right click and choose Select columns.
  • Tick Command line and click OK.

Option 2:

  • Open a PowerShell window (Get-WmiObject is a PowerShell cmdlet).
  • Run the following command:

Get-WmiObject win32_process -Filter "name like '%bpclntcmd.exe'" | select CreationDate,ProcessId,CommandLine | fl > c:\veritas_troubleshooting.txt

Unfortunately you will need to know the executable name beforehand. In the above example, the executable is bpclntcmd.exe.

nbstlutil dropwg command cannot proceed due to in-process SLP managed images

While uncommon, NetBackup Storage Lifecycle Policy (SLP) jobs may go out of whack due to changes to destination attributes (for example, the media server, storage server, or back-end storage) while there are active SLP jobs. The manifestations can be SLP jobs that do not start, an SLP backlog that does not seem to clear, or SLP jobs that keep failing.

Oftentimes, this issue can be resolved by flushing the existing SLP work groups and letting NetBackup automatically re-create fresh ones. The command is simple:

UNIX/Linux Master Server:

/usr/openv/netbackup/bin/admincmd/nbstlutil dropwg

Windows Master Server:

install_dir\Veritas\NetBackup\bin\admincmd\nbstlutil.exe dropwg

This command may refuse to work if there are active SLP jobs. For example:

C:\Program Files\Veritas\NetBackup\bin>nbstlutil dropwg
This operation will remove work group information in the database.  This will increase the time taken for SLP processing and may impact ongoing SLP operations.
 Do you wish to proceed ? (y/n):y
Work groups cannot be removed.  There are still in-process SLP managed images

In this scenario, either let the existing active SLP jobs complete or terminate them manually from the NetBackup Activity Monitor. The good news is that they will start again automatically later on, so you don’t have to worry about missing the duplicated/replicated data.

After successfully running nbstlutil dropwg, do not forget to restart the nbstserv process.

UNIX/Linux Master Server:

/usr/openv/netbackup/bin/admincmd/nbstserv -terminate
/usr/openv/netbackup/bin/admincmd/nbstserv

Windows Master Server:

install_dir\Veritas\NetBackup\bin\nbstserv.exe -terminate
install_dir\Veritas\NetBackup\bin\nbstserv.exe

How to use Azure as NetBackup Storage Unit

I have a confession to make: I am a big fan of Mark Russinovich. His Sysinternals tools are indispensable to many Windows users, including myself. Now that he is Azure’s CTO, I can’t wait to see what breakthrough he is going to show us next.

In this post, I am going to show you how to configure NetBackup to back up to Azure cloud storage. My procedure is extracted from the NetBackup Cloud Administrator’s Guide Release 8.1. Refer to the guide for any restrictions.

1. Create an Azure storage account

Obviously, you need to have an Azure account to begin with.

Navigate to All Resources > Add > Storage > Storage account – blob, file, table, queue.


Follow the wizard. Once your storage account is deployed, take note of the storage account name and access keys.

2. Create an Azure Storage Server in NetBackup

Now launch your NetBackup Administration Console and click the master server’s hostname at the top of the tree. In the right-hand pane, click Configure Cloud Storage Server.


Follow the wizard and select Microsoft Azure as the Storage API type. Continue until you are asked for the storage server name and account details.


The storage server name is a logical name of your choice. By default it is called my-azure.

If you have multiple media servers (the master is also a media server), select whichever you like, but make sure the NetBackup CloudStore Service Container service, also known as nbcssc, is running on it. On a Windows media server you can check via Task Manager; on a UNIX/Linux media server, use the ps command and grep for nbcssc.

Enter the Storage Account and one of the Access Keys you noted earlier.

Click Next and decide whether you want to enable compression and/or encryption. If you enable encryption, follow the wizard to enter the passphrases and key IDs, and keep them in a safe location. Click Next to reach the final confirmation page.


If there are no errors, you will be taken to the Disk Pool configuration wizard.

3. Create the Disk Pool

If you already have existing Azure container(s), they will be reflected in the volume list. If not, you can tell NetBackup to create one by clicking the Add New Volume button.

Enter a container name you like and click Add. This container will be created in Azure and shown under your Azure Storage Account.

If you enabled encryption, you will be prompted to create another passphrase.

Continue and enter a Disk Pool name. After the final confirmation, clicking Next creates the disk pool.


4. Create the Storage Unit

Continuing with the wizard, you will be prompted to create a storage unit. Simply give it a name and choose whether any available media server or only selected media server(s) should transport data.

For any media server that may transport data to the cloud, make sure the NetBackup CloudStore Service Container (nbcssc) service is running.

5. Test backup and restore

At this point, you can create a backup policy and point it to the storage unit you created in the previous step. Test both backup and restore, and have fun.