Troubleshooting Slowness with Traffic or Management

Environment

Resolution

Below are some commands, each with a brief description, that can be useful when troubleshooting Management- or Traffic-related issues. Such issues can range from persistent to intermittent or sporadic in nature.

NOTE: This document is a general guideline and should not be taken as the final diagnosis of the issue.

show system info

This command provides a snapshot of the model, PAN-OS version, and dynamic update versions (App, Threat, AV, WildFire, URL), among other things. The 'uptime' shown here refers to the dataplane uptime; it resets whenever the dataplane or the whole device is restarted.

admin@anuragFW> show system info
hostname: anuragFW
ip-address: 10.21.56.125
netmask: 255.255.254.0
default-gateway: 10.21.56.1
ip-assignment: static
ipv6-address: unknown
ipv6-link-local-address: fe80::20c:29ff:fecb:50ab/64
ipv6-default-gateway:
mac-address: 00:0c:29:cb:50:ab
time: Mon May 29 23:18:55 2017
uptime: 18 days, 22:00:54
family: vm
model: PA-VM
serial: 007000006243
vm-mac-base: 00:1B:17:F4:9B:00
vm-mac-count: 256
vm-uuid: 564DF4C3-7C1D-36AC-B58E-37AE79CB50AB
vm-cpuid: E4060300FFFBAB1F
vm-license: VM-100
vm-mode: VMWare ESXi
sw-version: 8.0.2
global-protect-client-package-version: 4.0.0
app-version: 703-4048
app-release-date: 2017/05/25 21:17:50
av-version: 2256-2743
av-release-date: 2017/05/27 04:01:19
threat-version: 703-4048
threat-release-date: 2017/05/25 21:17:50
wf-private-version: 0
wf-private-release-date: unknown
url-db: paloaltonetworks
wildfire-version: 144067-145602
wildfire-release-date: 2017/05/29 10:22:50
url-filtering-version: 20170529.20212
global-protect-datafile-version: unknown
global-protect-datafile-release-date: unknown
global-protect-clientless-vpn-version: 63-78
global-protect-clientless-vpn-release-date: 2017/05/15 13:47:50
logdb-version: 8.0.15
platform-family: vm
vpn-disable-mode: off
multi-vsys: off
operational-mode: normal
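If you capture this output to a file (or pull it over the XML API), it is easy to turn into structured data for scripted baseline checks. Below is a minimal Python sketch; `parse_system_info` is a hypothetical helper, not a PAN-OS tool, and the sample text is abbreviated from the output above.

```python
def parse_system_info(text: str) -> dict:
    """Split each 'key: value' line of captured CLI output into a dict entry.

    partition() splits on the FIRST colon only, so values that themselves
    contain colons (e.g. the uptime) survive intact.
    """
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

# Abbreviated sample taken from the output shown above.
sample = """\
hostname: anuragFW
sw-version: 8.0.2
app-version: 703-4048
uptime: 18 days, 22:00:54
"""

info = parse_system_info(sample)
print(info["sw-version"])  # -> 8.0.2
print(info["uptime"])      # -> 18 days, 22:00:54
```

Comparing such dicts from two points in time quickly shows whether, say, the dataplane uptime reset or a dynamic update version changed between "working" and "slow" states.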

show system resources <follow>

This command shows real-time Management-plane CPU usage. The 'up' value here refers to the uptime of the Management plane. The command can also be used to check memory usage and swap usage, if any. Ideally, swap usage should stay low and should not keep growing; steady growth can indicate a memory leak or simply too much load. The output follows the same format as the 'top' command on Linux machines. If commits take longer than an established "baseline", high management CPU can be one of the causes.

admin@anuragFW> show system resources follow
top - 23:20:35 up 18 days, 22:02, 1 user, load average: 0.24, 0.64, 0.56
Tasks: 130 total, 2 running, 128 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.9%us, 5.4%sy, 0.7%ni, 90.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 6577072k total, 6235716k used, 341356k free, 112720k buffers
Swap: 0k total, 0k used, 0k free, 3751268k cached
  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
30823 root   30  10 23464 1388 1008 S  3.9  0.0 0:00.02 top
30822 admin  20   0 23464 1416 1016 R  2.0  0.0 0:00.02 top
    1 root   20   0 16548  148    0 S  0.0  0.0 0:22.39 init
    2 root   20   0     0    0    0 S  0.0  0.0 0:00.02 kthreadd
    3 root   20   0     0    0    0 S  0.0  0.0 5:39.32 ksoftirqd/0
    5 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 kworker/0:0H
    7 root   RT   0     0    0    0 S  0.0  0.0 0:02.44 migration/0
    8 root   20   0     0    0    0 S  0.0  0.0 0:00.00 rcu_bh
    9 root   20   0     0    0    0 S  0.0  0.0 2:54.19 rcu_sched
   10 root   RT   0     0    0    0 S  0.0  0.0 0:00.02 migration/1
   11 root   20   0     0    0    0 S  0.0  0.0 0:57.69 ksoftirqd/1
   12 root   20   0     0    0    0 S  0.0  0.0 1:26.35 kworker/1:0
   13 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 kworker/1:0H
   14 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 khelper
  230 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 writeback
  233 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 bioset
  234 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 kblockd
  451 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 xenbus_frontend
  457 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 ata_sff
  464 root   20   0     0    0    0 S  0.0  0.0 0:00.00 khubd
  473 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 md
  476 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 edac-poller
  576 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 rpciod
  578 root   20   0     0    0    0 S  0.0  0.0 3:25.40 kworker/0:1
  589 root   20   0     0    0    0 S  0.0  0.0 0:16.02 kswapd0
  590 root   39  19     0    0    0 S  0.0  0.0 0:30.91 khugepaged
  591 root   20   0     0    0    0 S  0.0  0.0 0:00.00 fsnotify_mark
  592 root    1 -19     0    0    0 S  0.0  0.0 0:00.00 nfsiod
  738 root   20   0     0    0    0 S  0.0  0.0 0:00.01 scsi_eh_0
  741 root   20   0     0    0    0 S  0.0  0.0 0:00.00 scsi_eh_1
  759 root   20   0     0    0    0 S  0.0  0.0 0:00.00 kworker/0:2
  766 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 mpt_poll_0
  767 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 mpt/0
  773 root   20   0     0    0    0 S  0.0  0.0 0:00.00 kworker/1:2
  776 root   20   0     0    0    0 S  0.0  0.0 0:00.00 scsi_eh_2
  802 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 kpsmoused
  830 root    0 -20     0    0    0 S  0.0  0.0 0:00.00 deferwq
  845 root    0 -20     0    0    0 S  0.0  0.0 0:00.47 kworker/1:1H
  846 root    0 -20     0    0    0 S  0.0  0.0 0:20.87 kworker/0:1H
  847 root   20   0     0    0    0 S  0.0  0.0 1:15.67 kjournald
  902 root   16  -4 18992  424    8 S  0.0  0.0 0:00.53 udevd
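Since the text above singles out swap usage as a warning sign, a small script can watch the 'Swap:' summary line from successive captures. This is a hedged sketch, not an official check; the function name and the idea of alerting on any particular percentage are illustrative assumptions.

```python
import re

def swap_used_pct(swap_line: str) -> float:
    """Return swap used as a percentage of total, from a top-style 'Swap:' line.

    Returns 0.0 when no swap is configured (total of 0k), as on the VM above.
    """
    # Pairs like ('3751268', 'cached') pulled from "...3751268k cached".
    vals = {name: int(num) for num, name in re.findall(r"(\d+)k (\w+)", swap_line)}
    if vals.get("total", 0) == 0:
        return 0.0
    return 100.0 * vals["used"] / vals["total"]

line = "Swap: 0k total, 0k used, 0k free, 3751268k cached"
print(swap_used_pct(line))  # -> 0.0
```

Trending this value over a day makes "swap usage should not keep growing" concrete: a steadily rising percentage points at a memory leak or overload.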

show session info

This command reports the session parameters in effect, along with counters for packet rate, new connections per second, and more. Session parameters include, but are not limited to, the supported and current number of sessions, timeouts, and session-setup settings. The first section of the output is dynamic, meaning it will yield different values on every execution of the command.

admin@anuragFW> show session info
target-dp: *.dp0
--------------------------------------------------------------------------------
Number of sessions supported:                 256000
Number of allocated sessions:                 176
Number of active TCP sessions:                166
Number of active UDP sessions:                10
Number of active ICMP sessions:               0
Number of active GTPc sessions:               0
Number of active GTPu sessions:               0
Number of pending GTPu sessions:              0
Number of active BCAST sessions:              0
Number of active MCAST sessions:              0
Number of active predict sessions:            0
Session table utilization:                    0%
Number of sessions created since bootup:      374948
Packet rate:                                  12/s
Throughput:                                   8 kbps
New connection establish rate:                0 cps
--------------------------------------------------------------------------------
Session timeout
  TCP default timeout:                           3600 secs
  TCP session timeout before SYN-ACK received:   5 secs
  TCP session timeout before 3-way handshaking:  10 secs
  TCP half-closed session timeout:               120 secs
  TCP session timeout in TIME_WAIT:              15 secs
  TCP session timeout for unverified RST:        30 secs
  UDP default timeout:                           30 secs
  ICMP default timeout:                          6 secs
  other IP default timeout:                      30 secs
  Captive Portal session timeout:                30 secs
  Session timeout in discard state:
    TCP: 90 secs, UDP: 60 secs, other IP protocols: 60 secs
--------------------------------------------------------------------------------
Session accelerated aging:                      True
  Accelerated aging threshold:                  80% of utilization
  Scaling factor:                               2 X
--------------------------------------------------------------------------------
Session setup
  TCP - reject non-SYN first packet:            True
  Hardware session offloading:                  True
  IPv6 firewalling:                             True
  Strict TCP/IP checksum:                       True
  ICMP Unreachable Packet Rate:                 200 pps
--------------------------------------------------------------------------------
Application trickling scan parameters:
  Timeout to determine application trickling:   10 secs
  Resource utilization threshold to start scan: 80%
  Scan scaling factor over regular aging:       8
--------------------------------------------------------------------------------
Session behavior when resource limit is reached: drop
--------------------------------------------------------------------------------
Pcap token bucket rate                         : 10485760
--------------------------------------------------------------------------------
Max pending queued mcast packets per session   : 0
--------------------------------------------------------------------------------
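The CLI rounds session table utilization to a whole percent (0% above), so for trending it can help to compute it yourself from the raw counts. A minimal Python sketch, assuming the field labels shown in the output above:

```python
import re

def session_utilization(text: str) -> float:
    """Compute allocated/supported sessions as a percentage from captured
    'show session info' output."""
    supported = int(re.search(r"Number of sessions supported:\s+(\d+)", text).group(1))
    allocated = int(re.search(r"Number of allocated sessions:\s+(\d+)", text).group(1))
    return 100.0 * allocated / supported

# Values taken from the sample output above.
sample = """\
Number of sessions supported:                 256000
Number of allocated sessions:                 176
"""
print(round(session_utilization(sample), 3))  # -> 0.069
```

Logging this at intervals gives you the "baseline vs. spike" comparison the rest of this document recommends, at finer resolution than the rounded CLI field.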

show system statistics session

This command shows real-time values for the count of active sessions, throughput, packet rate, and dataplane uptime. The display refreshes every few seconds to update the values shown.

admin@anuragFW> show system statistics session
System Statistics: ('q' to quit, 'h' for help)
Device is up          : 18 days 22 hours 3 mins 54 sec
Packet rate           : 32/s
Throughput            : 136 Kbps
Total active sessions : 166
Active TCP sessions   : 157
Active UDP sessions   : 9
Active ICMP sessions  : 0

You can type the following key to switch what to display
--------------------------------------------------------
'a' - Display application statistics
'h' - Display this help page
'q' - Quit this program
's' - Display system statistics

debug dataplane pool statistics

This command's output has changed significantly from older PAN-OS versions. It now shows packet buffers, resource pools, and memory-cache usage by different processes. If a pool becomes depleted, traffic performance that depends on that particular resource pool will suffer. For the pools, the number on the left shows what remains available while the number on the right shows the total capacity; the total capacity varies by platform, model, and OS version. Likewise, if a certain process uses too much memory, that can also cause issues related to that process.

admin@anuragFW> debug dataplane pool statistics
Pow Atomic Memory Pools
[ 0] Work Queue Entries        :   98300/98304     0xe028378340
[ 1] Packet Buffers            :   38474/38912     0xc000a61780
Software Pools
[ 0] Shared Pool 24            (   24):  659564/660000     0xe000467500
[ 1] Shared Pool 32            (   32):  659521/660000     0xe001607200
[ 2] Shared Pool 40            (   40):  169989/170000     0xe002cb0000
[ 3] Shared Pool 192           (  192): 1253777/1255000    0xe0033d2480
[ 4] Shared Pool 256           (  256):  139968/140000     0xe011e68180
[ 5] software packet buffer 0  (  512):   16384/16384      0xe02a038900
[ 6] software packet buffer 1  ( 1024):   16384/16384      0xe02a848a80
[ 7] software packet buffer 2  ( 2048):   32768/32768      0xe02b858c00
[ 8] software packet buffer 3  (33280):    8192/8192       0xe02f878d80
[ 9] software packet buffer 4  (66048):     304/304        0xe03fc80f00
[10] CTD AV Block              ( 1024):      32/32         0xe0d46fa400
[11] Regex Results             (11544):    8000/8000       0xe0d4e80200
[12] SSH Handshake State       ( 6512):      16/16         0xe0eee36b80
[13] SSH State                 ( 3200):     128/128        0xe0eee50480
[14] TCP host connections      (  176):      15/16         0xe0eeeb4d00
[15] DFA Result                ( 1024):    1024/1024       0xe0f13a6180
[16] GTP Context Entries       (  256):   85336/85336      0xe0f1a3e400

Shared Pools Statistics
Current local reuse cache counts for each pool:
core    0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
 24     0  384    0    0    0    0    0    0    0    0    0    0    0    0    0    0
 32     0  384    0    0    0    0    0    0    0    0    0    0    0    0    0    0
 40     0   11    0    0    0    0    0    0    0    0    0    0    0    0    0    0
192     0  765    0    0    0    0    0    0    0    0    0    0    0    0    0    0
256     0   32    0    0    0    0    0    0    0    0    0    0    0    0    0    0

Local Reuse Pools
                       Shared Pool 24  Shared Pool 32  Shared Pool 40  Shared Pool 192  Shared Pool 256
Cached / Max           384/6144        384/6144        11/512          765/12288        32/512
Cached + Free / Total  659948/660000   659905/660000   170000/170000   1254542/1255000  140000/140000

User             Quota   Threshold Min.Alloc Cur.Alloc Max.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse Data(Pool)-SZ
fptcp_seg        65536   0         0         0         0         537140      0           0          536649      16 (24)
inner_decode     4000    0         0         0         0         0           0           0          0           16 (24)
detector_threat  233016  0         0         22        0         114079      0           0          114023      24 (24)
spyware_state    51200   0         0         0         0         0           0           0          0           24 (24)
vm_vcheck        81920   0         0         0         0         1322271     0           0          1322148     24 (24)
ctd_patmatch     192001  0         0         30        0         14000       0           0          14000       24 (24)
proxy_l2info     6400    0         0         0         0         0           0           0          0           24 (24)
proxy_pktmr      6400    0         0         0         0         0           0           0          0           16 (24)
vm_field         500000  0         0         95        0         5135529     0           0          5134623     32 (32)
prl_cookie       6400    0         0         0         0         0           0           0          0           32 (32)
decode_filter    81920   0         0         0         0         2194        0           0          2183        40 (40)
hash_decode      1024    0         0         0         0         0           0           0          0           104 (192)
appid_session    256002  0         0         0         0         314584      0           0          311524      104 (192)
appid_dfa_state  256002  0         0         0         0         4           0           0          4           184 (192)
cpat_state       64000   1004000   32000     0         0         0           0           0          0           184 (192)
sml_regfile      512004  0         0         122       0         560187      0           0          554372      192 (192)
ctd_flow         256002  0         0         61        0         280118      0           0          279134      192 (192)
ctd_flow_state   256002  0         0         53        0         17008       0           0          15066       176 (192)
ctd_dlp_flow     64000   878500    32000     0         0         84          0           0          83          192 (192)
proxy_flow       12800   0         0         74        0         13000       0           0          11479       192 (192)
prl_st           6400    0         0         0         0         0           0           0          0           192 (192)
ssl_hs_st        6400    0         0         0         0         13000       0           0          9727        192 (192)
ssl_key_block    12800   0         0         74        0         12908       0           0          12510       192 (192)
ssl_st           12800   0         0         74        0         13000       0           0          9815        192 (192)
ssl_hs_mac       19200   0         0         0         0         49924       0           0          35277       variable
timer_chunk      131072  0         0         0         0         99555       0           0          89397       256 (256)

Memory Pool Size 126400KB, start address 0xe000000000
alloc size 853757, max 1315990
fixed buf allocator, size 129430024
sz allocator, page size 32768, max alloc 4096 quant 64
pool 0 element size 64 avail list 5 full list 2
pool 1 element size 128 avail list 7 full list 1
pool 2 element size 192 avail list 1 full list 0
pool 3 element size 256 avail list 1 full list 7
pool 4 element size 320 avail list 2 full list 0
pool 5 element size 384 avail list 1 full list 0
pool 8 element size 576 avail list 1 full list 0
pool 10 element size 704 avail list 15 full list 0
pool 16 element size 1088 avail list 1 full list 0
pool 23 element size 1536 avail list 1 full list 0
pool 24 element size 1600 avail list 1 full list 0
pool 25 element size 1664 avail list 1 full list 2
pool 26 element size 1728 avail list 1 full list 0
pool 27 element size 1792 avail list 1 full list 0
pool 28 element size 1856 avail list 1 full list 0
pool 29 element size 1920 avail list 1 full list 0
pool 31 element size 2048 avail list 1 full list 0
pool 33 element size 2176 avail list 1 full list 0
pool 34 element size 2240 avail list 1 full list 0
pool 35 element size 2304 avail list 1 full list 0
pool 37 element size 2432 avail list 1 full list 0
pool 44 element size 2880 avail list 1 full list 0
pool 53 element size 3456 avail list 1 full list 0
pool 57 element size 3712 avail list 1 full list 0
pool 58 element size 3776 avail list 1 full list 0
parent allocator alloc size 2067189, max 2165493
malloc allocator current usage 2097152 max. usage 2195456, free chunks 3885, total chunks 3949

Mem-Pool-Type      MaxSz(KB) Threshold MinSz(KB) CurSz(B) Cur.Alloc Total-Alloc Fail-Thresh Fail-Nomem Local-Reuse(cache)
ctd_dlp_buf        3968      63200     1984      0        0         0           0           0          0 (0)
proxy              51862     0         0         678056   5333      187433      0           0          84 (1)
clientless_sslvpn  51862     0         0         0        0         0           0           0          0 (0)
l7_data            2712      0         0         0        0         0           0           0          0 (0)
l7_misc            30491     88480     15245     111464   1417      2799        0           0          62 (1)
cfg_name_cache     553       88480     276       48       1         1           0           0          0 (0)
scantracker        480       75840     240       0        0         0           0           0          0 (0)
appinfo            105       88480     52        0        0         0           0           0          0 (0)
user               16750     0         0         1544     22        22          0           0          0 (0)
userpolicy         8000      0         0         0        0         1602        0           0          0 (0)
dns                2048      88480     1024      0        0         0           0           0          0 (0)
credential         8192      75840     4096      0        0         0           0           0          0 (0)

Cache-Type        MAX-Entries Cur-Entries Cur.SZ(B) Insert-Failure Mem-Pool-Type
ssl_server_cert   16384       305         24400     0              l7_misc
ssl_cert_cn       1250        1105        86488     0              l7_misc
ssl_cert_cache    256         94          6016      0              proxy
ssl_sess_cache    1000        1000        208000    0              proxy
proxy_exclude     1024        5           1200      0              proxy
proxy_notify      8192        0           0         0              proxy
ctd_block_answer  16384       0           0         0              l7_misc
username_cache    4096        1           48        0              cfg_name_cache
threatname_cache  4096        0           0         0              cfg_name_cache
hipname_cache     256         0           0         0              cfg_name_cache
ctd_cp            16384       0           0         0              l7_misc
ctd_driveby       4096        0           0         0              l7_misc
ctd_pcap          1024        0           0         0              l7_misc
ctd_sml           4096        2           96        0              l7_misc
ctd_url           50000      5           480       0              l7_misc
app_tracker       8192        0           0         0              l7_misc
threat_tracker    4096        0           0         0              l7_misc
scan_tracker      4096        0           0         0              scantracker
app_info          500         0           0         0              appinfo
dns               6400        0           0         0              proxy
dns_v4            10000       0           0         0              dns
dns_v6            10000       0           0         0              dns
dns_id            1024        0           0         0              dns
tcp_mcb           8256        0           0         0              l7_misc
sslvpn_ck_cache   25          0           0         0              clientless_sslvpn
user_cache        64000       22          1544      0              user
userpolicy_cache  8000        0           0         0              userpolicy
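Since the pool lines read 'remaining/total', a quick script can flag pools that are running low. This is an illustrative sketch only: `depleted_pools` is a hypothetical helper, and the 20% free threshold is an arbitrary example, not a PAN-OS default.

```python
import re

def depleted_pools(text: str, min_free_pct: float = 20.0) -> list:
    """Return (pool_name, free_pct) for pool lines like
    '[ 1] Packet Buffers : 38474/38912 0x...' whose remaining share is
    below min_free_pct. The optional '( 24)' element-size field is skipped."""
    low = []
    for name, remaining, total in re.findall(
            r"\]\s+(.+?)\s*(?:\(\s*\d+\))?:\s*(\d+)/(\d+)", text):
        free_pct = 100.0 * int(remaining) / int(total)
        if free_pct < min_free_pct:
            low.append((name.strip(), round(free_pct, 1)))
    return low

# A contrived low-buffer line for illustration (the real output above is healthy).
sample = "[ 1] Packet Buffers : 1000/38912 0xc000a61780"
print(depleted_pools(sample))  # -> [('Packet Buffers', 2.6)]
```

Running this against periodic captures during a slowdown points directly at the resource pool whose depletion correlates with the symptoms.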

show running resource-monitor

This is the most important command for viewing dataplane CPU usage over different time intervals. Usually, if the dataplane CPU stays high (above roughly 90%), traffic will feel sluggish and latency will rise. The best strategy is to establish a regular 24-hour usage "baseline" and then compare it against the times when spikes are experienced.

admin@anuragFW> show running resource-monitor
> day      Per-day monitoring statistics
> hour     Per-hour monitoring statistics
> minute   Per-minute monitoring statistics
> second   Per-second monitoring statistics
> week     Per-week monitoring statistics
| Pipe through a command
<Enter>  Finish input
admin@anuragFW> show running resource-monitor
Resource monitoring sampling data (per second):
CPU load sampling by group:
flow_lookup       : 3%
flow_fastpath     : 3%
flow_slowpath     : 3%
flow_forwarding   : 3%
flow_mgmt         : 3%
flow_ctrl         : 3%
nac_result        : 0%
flow_np           : 3%
dfa_result        : 0%
module_internal   : 3%
aho_result        : 0%
zip_result        : 0%
pktlog_forwarding : 3%
send_out          : 3%
flow_host         : 3%
send_host         : 3%
CPU load (%) during last 60 seconds:
core 0 1
* 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 2 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 2 * 2 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 4 * 3 * 3 * 3 * 3 * 3 * 3 * 2 * 2 * 2 * 1 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3
Resource utilization (%) during last 60 seconds:
session:            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer:      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor:  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Resource monitoring sampling data (per minute):
CPU load (%) during last 60 minutes:
core 0 1
     avg max avg max
* * 3 3 * * 3 8 * * 11 50 * * 8 36 * * 3 4 * * 4 22 * * 3 5 * * 3 4 * * 3 3 * * 3 4 * * 3 3 * * 3 4 * * 3 3 * * 2 3 * * 3 4 * * 3 3 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 3 * * 3 3 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 4 * * 3 3 * * 3 4 * * 3 4 * * 3 4
Resource utilization (%) during last 60 minutes:
session (average):            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
session (maximum):            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (average):      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (average): 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (maximum): 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Resource monitoring sampling data (per hour):
CPU load (%) during last 24 hours:
core 0 1
     avg max avg max
* * 3 36 * * 3 4 * * 3 4 * * 3 6 * * 3 8 * * 3 7 * * 3 5 * * 3 5 * * 3 6 * * 3 5 * * 3 5 * * 3 6 * * 3 8 * * 3 7 * * 3 5 * * 3 9 * * 3 5 * * 3 7 * * 3 6 * * 3 6 * * 3 6 * * 3 6 * * 3 7 * * 3 7
Resource utilization (%) during last 24 hours:
session (average):            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
session (maximum):            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (average):      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (average): 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (maximum): 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Resource monitoring sampling data (per day):
CPU load (%) during last 7 days:
core 0 1
     avg max avg max
* * 3 10 * * 3 12 * * 3 30 * * 3 20 * * 3 49 * * 3 77 * * 3 13
Resource utilization (%) during last 7 days:
session (average):            0 0 0 0 0 0 0
session (maximum):            0 0 0 0 0 0 0
packet buffer (average):      0 0 0 0 0 0 0
packet buffer (maximum):      0 0 0 0 0 0 0
packet descriptor (average):  0 0 0 0 0 0 0
packet descriptor (maximum):  0 0 0 0 0 0 0
sw tags descriptor (average): 0 0 0 0 0 0 0
sw tags descriptor (maximum): 0 0 0 0 0 0 0
Resource monitoring sampling data (per week):
CPU load (%) during last 13 weeks:
core 0 1
     avg max avg max
* * 3 77 * * 3 74 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
Resource utilization (%) during last 13 weeks:
session (average):            0 0 0 0 0 0 0 0 0 0 0 0 0
session (maximum):            0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (average):      0 0 0 0 0 0 0 0 0 0 0 0 0
packet buffer (maximum):      0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (average):  0 0 0 0 0 0 0 0 0 0 0 0 0
packet descriptor (maximum):  0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (average): 0 0 0 0 0 0 0 0 0 0 0 0 0
sw tags descriptor (maximum): 0 0 0 0 0 0 0 0 0 0 0 0 0
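The baseline-vs-spike comparison described above is easy to automate once the per-minute or per-hour maxima are extracted into a list of numbers. A hedged sketch; the function name, the sample list, and the 90% threshold (mirroring the rule of thumb in the text) are all illustrative.

```python
def spike_minutes(samples, threshold=90):
    """Return the indexes of load samples at or above the threshold.

    'samples' would hold per-interval max CPU values pulled from
    'show running resource-monitor' output; index 0 is the most recent
    interval in the CLI's ordering.
    """
    return [i for i, load in enumerate(samples) if load >= threshold]

# Illustrative per-minute maxima, loosely modeled on the output above
# with two fabricated spikes added.
per_minute_max = [3, 8, 50, 36, 4, 22, 95, 3, 91, 4]
print(spike_minutes(per_minute_max))  # -> [6, 8]
```

Correlating the returned intervals with commit times, log-forwarding bursts, or traffic peaks narrows down what drives the dataplane CPU spikes.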

show counter global

This command lists all the global counters available on the firewall for the given OS version. For every packet that arrives, traverses, or is dropped by the firewall, we should see one or more counters increment. Combined with a packet filter, these counters are extremely powerful for troubleshooting traffic-related issues.

admin@anuragFW> show counter global
Global counters:
Elapsed time since last sampling: 491.920 seconds
name                      value     rate severity category aspect    description
--------------------------------------------------------------------------------
pkt_recv               57490275      67  info     packet   pktproc   Packets received
pkt_sent                1402176      44  info     packet   pktproc   Packets transmitted
pkt_sent_host            359410       0  info     packet   pktproc   Packets successfully transmitted to host interface
pkt_stp_rcv             1085307       0  info     packet   pktproc   STP BPDU packets received
session_allocated        375497       1  info     session  resource  Sessions allocated
session_freed            375449       2  info     session  resource  Sessions freed
session_installed        375019       1  info     session  resource  Sessions installed
session_discard           85940       0  info     session  resource  Session set to discard by security policy check
session_unverified_rst     1034       0  info     session  pktproc   Session aging timer modified by unverified RST
...cut for brevity...

Different filters can be set to narrow the focus to the relevant counters.

admin@anuragFW> show counter global filter severity drop
Global counters:
Elapsed time since last sampling: 13.407 seconds
name                       value     rate severity category aspect   description
-----------------------------------------------------------------------------------------------------------------
flow_rcv_dot1q_tag_err  46215426      13  drop     flow     parse    Packets dropped: 802.1q tag not configured
flow_no_interface       46215426      13  drop     flow     parse    Packets dropped: invalid interface
-----------------------------------------------------------------------------------------------------------------

We can also use the 'match' sub-command to show only lines containing the string given as its argument.

admin@anuragFW> show counter global | match deny
flow_policy_deny          23905       0  drop     flow     session  Session setup: denied by policy
flow_rematch_parent          10       0  info     flow     pktproc  number of rematch deny for parent sessions
flow_host_service_deny  1871180       2  drop     flow     mgmt     Device management session denied
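Counter values only become meaningful when you see which ones increment while the problem reproduces, so a common workflow is to diff two snapshots. The sketch below assumes the column layout shown above (name first, value second); `parse_counters` and `deltas` are hypothetical helpers for captured output, not CLI features.

```python
def parse_counters(text: str) -> dict:
    """Map counter name -> value for lines shaped like the CLI table rows."""
    counters = {}
    for line in text.splitlines():
        fields = line.split()
        # Counter rows have a name followed by a numeric value column.
        if len(fields) >= 3 and fields[1].isdigit():
            counters[fields[0]] = int(fields[1])
    return counters

def deltas(before: dict, after: dict) -> dict:
    """Return only the counters that increased between two snapshots."""
    return {name: after[name] - before.get(name, 0)
            for name in after if after[name] > before.get(name, 0)}

# Two contrived snapshots of the same counter, a short time apart.
snap1 = parse_counters("flow_policy_deny 23905 0 drop flow session Session setup: denied by policy")
snap2 = parse_counters("flow_policy_deny 23950 0 drop flow session Session setup: denied by policy")
print(deltas(snap1, snap2))  # -> {'flow_policy_deny': 45}
```

Running this against 'show counter global filter severity drop' output taken before and after reproducing the slowness highlights exactly which drop reasons are active.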