Merge branch 'nexthop-cache' into next

Nikolay Aleksandrov says:

====================

This set addresses an old ask that we've had for some time: printing
nexthop information while monitoring or dumping routes. The core problem
is that people cannot follow nexthop changes while monitoring route
changes; by the time they check the nexthop it could be deleted or
updated to something else. To help them out I've added a nexthop cache
which is populated while decoding routes (only used if -d / show_details
is specified) and kept up to date while monitoring. The nexthop
information is printed on its own line starting with the "nh_info"
attribute, and it's embedded inside that attribute when printing JSON.
To cache the nexthop entries I parse them into structures; to reuse most
of the code, the print helpers have been altered so they rely on
prepared structures. Nexthops are now always parsed into a structure,
even if they won't be cached; that structure is later used to print the
nexthop and is destroyed if it isn't going to be cached. New nexthops
(not found in the cache) are retrieved from the kernel using a private
netlink socket so they don't disrupt an ongoing dump, similar to how
interfaces are retrieved and cached.

I have tested the set with the kernel forwarding selftests and also by
stressing it with nexthop create/update/delete in loops while monitoring.

Comments are very welcome as usual. :)

Changes since RFC:
 - reordered the parse/print splits: to do that I had to parse
   resilient groups first and then add nh entry parsing, so the code has
   been reordered as well and the patch order has changed, but there are
   no functional changes (as before, refactoring of the old code is done
   in the first 8 patches, and patches 9-12 add the new cache and use it)
 - re-ran all tests above

Patch breakdown:
Patches 1-2: update current route helpers to take parsed arguments so we
             can directly pass them from the nh_entry structure later
Patch     3: adds the new nha_res_grp structure which describes a resilient
             nexthop group
Patch     4: splits print_nh_res_group into parse and print parts
             which use the new nha_res_grp structure
Patch     5: adds new nh_entry structure which describes a nexthop
Patch     6: factors out print_nexthop's attribute parsing into an nh_entry
             structure used before printing
Patch     7: factors out print_nexthop's nh_entry structure printing
Patch     8: factors out ipnh_get's rtnl talk part and allows a
             different rt handle to be used for the communication
Patch     9: adds a nexthop cache and helpers to manage it; it uses the
             new __ipnh_get_id to retrieve nexthops
Patch    10: adds a new helper, print_cache_nexthop_id, which prints nexthop
             information from its id; if the nexthop is not found in the
             cache, it is fetched from the kernel
Patch    11: uses the new print_cache_nexthop_id helper when printing
             routes with show_details (-d) to output detailed nexthop
             information; the format after nh_info is the same as that of
             ip nexthop show
Patch    12: changes print_nexthop into print_cache_nexthop, which always
             outputs the nexthop information and can also update the cache
             (based on its process_cache argument); it's used to keep the
             cache up to date while monitoring

Example outputs (monitor):
[NEXTHOP]id 101 via 169.254.2.22 dev veth2 scope link proto unspec
[NEXTHOP]id 102 via 169.254.3.23 dev veth4 scope link proto unspec
[NEXTHOP]id 103 group 101/102 type resilient buckets 512 idle_timer 0 unbalanced_timer 0 unbalanced_time 0 scope global proto unspec
[ROUTE]unicast 192.0.2.0/24 nhid 203 table 4 proto boot scope global
	nh_info id 203 group 201/202 type resilient buckets 512 idle_timer 0 unbalanced_timer 0 unbalanced_time 0 scope global proto unspec
	nexthop via 169.254.2.12 dev veth3 weight 1
	nexthop via 169.254.3.13 dev veth5 weight 1

[NEXTHOP]id 204 via fe80:2::12 dev veth3 scope link proto unspec
[NEXTHOP]id 205 via fe80:3::13 dev veth5 scope link proto unspec
[NEXTHOP]id 206 group 204/205 type resilient buckets 512 idle_timer 0 unbalanced_timer 0 unbalanced_time 0 scope global proto unspec
[ROUTE]unicast 2001:db8:1::/64 nhid 206 table 4 proto boot scope global metric 1024 pref medium
	nh_info id 206 group 204/205 type resilient buckets 512 idle_timer 0 unbalanced_timer 0 unbalanced_time 0 scope global proto unspec
	nexthop via fe80:2::12 dev veth3 weight 1
	nexthop via fe80:3::13 dev veth5 weight 1

[NEXTHOP]id 2  encap mpls  200/300 via 10.1.1.1 dev ens20 scope link proto unspec onlink
[ROUTE]unicast 2.3.4.10 nhid 2 table main proto boot scope global
	nh_info id 2  encap mpls  200/300 via 10.1.1.1 dev ens20 scope link proto unspec onlink

JSON:
 {
        "type": "unicast",
        "dst": "198.51.100.0/24",
        "nhid": 103,
        "table": "3",
        "protocol": "boot",
        "scope": "global",
        "flags": [ ],
        "nh_info": {
            "id": 103,
            "group": [ {
                    "id": 101,
                    "weight": 11
                },{
                    "id": 102,
                    "weight": 45
                } ],
            "type": "resilient",
            "resilient_args": {
                "buckets": 512,
                "idle_timer": 0,
                "unbalanced_timer": 0,
                "unbalanced_time": 0
            },
            "scope": "global",
            "protocol": "unspec",
            "flags": [ ]
        },
        "nexthops": [ {
                "gateway": "169.254.2.22",
                "dev": "veth2",
                "weight": 11,
                "flags": [ ]
            },{
                "gateway": "169.254.3.23",
                "dev": "veth4",
                "weight": 45,
                "flags": [ ]
            } ]
  }

====================

Signed-off-by: David Ahern <dsahern@kernel.org>
Author: David Ahern, 2021-10-03 18:31:44 -06:00
commit ec703e0629
5 changed files with 457 additions and 112 deletions


@@ -53,7 +53,6 @@ int print_prefix(struct nlmsghdr *n, void *arg);
int print_rule(struct nlmsghdr *n, void *arg);
int print_netconf(struct rtnl_ctrl_data *ctrl,
struct nlmsghdr *n, void *arg);
int print_nexthop(struct nlmsghdr *n, void *arg);
int print_nexthop_bucket(struct nlmsghdr *n, void *arg);
void netns_map_init(void);
void netns_nsid_socket_init(void);
@@ -168,7 +167,8 @@ int name_is_vrf(const char *name);
void print_num(FILE *fp, unsigned int width, uint64_t count);
void print_rt_flags(FILE *fp, unsigned int flags);
void print_rta_if(FILE *fp, const struct rtattr *rta, const char *prefix);
void print_rta_ifidx(FILE *fp, __u32 ifidx, const char *prefix);
void __print_rta_gateway(FILE *fp, unsigned char family, const char *gateway);
void print_rta_gateway(FILE *fp, unsigned char family,
const struct rtattr *rta);
#endif /* _IP_COMMON_H_ */


@@ -22,6 +22,7 @@
#include "utils.h"
#include "ip_common.h"
#include "nh_common.h"
static void usage(void) __attribute__((noreturn));
static int prefix_banner;
@@ -88,7 +89,7 @@ static int accept_msg(struct rtnl_ctrl_data *ctrl,
case RTM_NEWNEXTHOP:
case RTM_DELNEXTHOP:
print_headers(fp, "[NEXTHOP]", ctrl);
print_nexthop(n, arg);
print_cache_nexthop(n, arg, true);
return 0;
case RTM_NEWNEXTHOPBUCKET:


@@ -13,6 +13,7 @@
#include "utils.h"
#include "ip_common.h"
#include "nh_common.h"
static struct {
unsigned int flushed;
@@ -33,6 +34,8 @@ enum {
#define RTM_NHA(h) ((struct rtattr *)(((char *)(h)) + \
NLMSG_ALIGN(sizeof(struct nhmsg))))
static struct hlist_head nh_cache[NH_CACHE_SIZE];
static void usage(void) __attribute__((noreturn));
static void usage(void)
@@ -212,28 +215,29 @@ out:
return rc;
}
static void print_nh_group(FILE *fp, const struct rtattr *grps_attr)
static bool __valid_nh_group_attr(const struct rtattr *g_attr)
{
struct nexthop_grp *nhg = RTA_DATA(grps_attr);
int num = RTA_PAYLOAD(grps_attr) / sizeof(*nhg);
int i;
int num = RTA_PAYLOAD(g_attr) / sizeof(struct nexthop_grp);
if (!num || num * sizeof(*nhg) != RTA_PAYLOAD(grps_attr)) {
fprintf(fp, "<invalid nexthop group>");
return;
}
return num && num * sizeof(struct nexthop_grp) == RTA_PAYLOAD(g_attr);
}
static void print_nh_group(const struct nh_entry *nhe)
{
int i;
open_json_array(PRINT_JSON, "group");
print_string(PRINT_FP, NULL, "%s", "group ");
for (i = 0; i < num; ++i) {
for (i = 0; i < nhe->nh_groups_cnt; ++i) {
open_json_object(NULL);
if (i)
print_string(PRINT_FP, NULL, "%s", "/");
print_uint(PRINT_ANY, "id", "%u", nhg[i].id);
if (nhg[i].weight)
print_uint(PRINT_ANY, "weight", ",%u", nhg[i].weight + 1);
print_uint(PRINT_ANY, "id", "%u", nhe->nh_groups[i].id);
if (nhe->nh_groups[i].weight)
print_uint(PRINT_ANY, "weight", ",%u",
nhe->nh_groups[i].weight + 1);
close_json_object();
}
@@ -253,50 +257,59 @@ static const char *nh_group_type_name(__u16 type)
}
}
static void print_nh_group_type(FILE *fp, const struct rtattr *grp_type_attr)
static void print_nh_group_type(__u16 nh_grp_type)
{
__u16 type = rta_getattr_u16(grp_type_attr);
if (type == NEXTHOP_GRP_TYPE_MPATH)
if (nh_grp_type == NEXTHOP_GRP_TYPE_MPATH)
/* Do not print type in order not to break existing output. */
return;
print_string(PRINT_ANY, "type", "type %s ", nh_group_type_name(type));
print_string(PRINT_ANY, "type", "type %s ", nh_group_type_name(nh_grp_type));
}
static void print_nh_res_group(FILE *fp, const struct rtattr *res_grp_attr)
static void parse_nh_res_group_rta(const struct rtattr *res_grp_attr,
struct nha_res_grp *res_grp)
{
struct rtattr *tb[NHA_RES_GROUP_MAX + 1];
struct rtattr *rta;
struct timeval tv;
memset(res_grp, 0, sizeof(*res_grp));
parse_rtattr_nested(tb, NHA_RES_GROUP_MAX, res_grp_attr);
open_json_object("resilient_args");
if (tb[NHA_RES_GROUP_BUCKETS])
print_uint(PRINT_ANY, "buckets", "buckets %u ",
rta_getattr_u16(tb[NHA_RES_GROUP_BUCKETS]));
res_grp->buckets = rta_getattr_u16(tb[NHA_RES_GROUP_BUCKETS]);
if (tb[NHA_RES_GROUP_IDLE_TIMER]) {
rta = tb[NHA_RES_GROUP_IDLE_TIMER];
__jiffies_to_tv(&tv, rta_getattr_u32(rta));
print_tv(PRINT_ANY, "idle_timer", "idle_timer %g ", &tv);
res_grp->idle_timer = rta_getattr_u32(rta);
}
if (tb[NHA_RES_GROUP_UNBALANCED_TIMER]) {
rta = tb[NHA_RES_GROUP_UNBALANCED_TIMER];
__jiffies_to_tv(&tv, rta_getattr_u32(rta));
print_tv(PRINT_ANY, "unbalanced_timer", "unbalanced_timer %g ",
&tv);
res_grp->unbalanced_timer = rta_getattr_u32(rta);
}
if (tb[NHA_RES_GROUP_UNBALANCED_TIME]) {
rta = tb[NHA_RES_GROUP_UNBALANCED_TIME];
__jiffies_to_tv(&tv, rta_getattr_u32(rta));
print_tv(PRINT_ANY, "unbalanced_time", "unbalanced_time %g ",
&tv);
res_grp->unbalanced_time = rta_getattr_u64(rta);
}
}
static void print_nh_res_group(const struct nha_res_grp *res_grp)
{
struct timeval tv;
open_json_object("resilient_args");
print_uint(PRINT_ANY, "buckets", "buckets %u ", res_grp->buckets);
__jiffies_to_tv(&tv, res_grp->idle_timer);
print_tv(PRINT_ANY, "idle_timer", "idle_timer %g ", &tv);
__jiffies_to_tv(&tv, res_grp->unbalanced_timer);
print_tv(PRINT_ANY, "unbalanced_timer", "unbalanced_timer %g ", &tv);
__jiffies_to_tv(&tv, res_grp->unbalanced_time);
print_tv(PRINT_ANY, "unbalanced_time", "unbalanced_time %g ", &tv);
close_json_object();
}
@@ -328,14 +341,325 @@ static void print_nh_res_bucket(FILE *fp, const struct rtattr *res_bucket_attr)
close_json_object();
}
int print_nexthop(struct nlmsghdr *n, void *arg)
static void ipnh_destroy_entry(struct nh_entry *nhe)
{
if (nhe->nh_encap)
free(nhe->nh_encap);
if (nhe->nh_groups)
free(nhe->nh_groups);
}
/* parse nhmsg into nexthop entry struct which must be destroyed by
 * ipnh_destroy_entry when it's not needed anymore
*/
static int ipnh_parse_nhmsg(FILE *fp, const struct nhmsg *nhm, int len,
struct nh_entry *nhe)
{
struct rtattr *tb[NHA_MAX+1];
int err = 0;
memset(nhe, 0, sizeof(*nhe));
parse_rtattr_flags(tb, NHA_MAX, RTM_NHA(nhm), len, NLA_F_NESTED);
if (tb[NHA_ID])
nhe->nh_id = rta_getattr_u32(tb[NHA_ID]);
if (tb[NHA_OIF])
nhe->nh_oif = rta_getattr_u32(tb[NHA_OIF]);
if (tb[NHA_GROUP_TYPE])
nhe->nh_grp_type = rta_getattr_u16(tb[NHA_GROUP_TYPE]);
if (tb[NHA_GATEWAY]) {
if (RTA_PAYLOAD(tb[NHA_GATEWAY]) > sizeof(nhe->nh_gateway)) {
fprintf(fp, "<nexthop id %u invalid gateway length %lu>\n",
nhe->nh_id, RTA_PAYLOAD(tb[NHA_GATEWAY]));
err = -EINVAL;
goto out_err;
}
nhe->nh_gateway_len = RTA_PAYLOAD(tb[NHA_GATEWAY]);
memcpy(&nhe->nh_gateway, RTA_DATA(tb[NHA_GATEWAY]),
RTA_PAYLOAD(tb[NHA_GATEWAY]));
}
if (tb[NHA_ENCAP]) {
nhe->nh_encap = malloc(RTA_LENGTH(RTA_PAYLOAD(tb[NHA_ENCAP])));
if (!nhe->nh_encap) {
err = -ENOMEM;
goto out_err;
}
memcpy(nhe->nh_encap, tb[NHA_ENCAP],
RTA_LENGTH(RTA_PAYLOAD(tb[NHA_ENCAP])));
memcpy(&nhe->nh_encap_type, tb[NHA_ENCAP_TYPE],
sizeof(nhe->nh_encap_type));
}
if (tb[NHA_GROUP]) {
if (!__valid_nh_group_attr(tb[NHA_GROUP])) {
fprintf(fp, "<nexthop id %u invalid nexthop group>",
nhe->nh_id);
err = -EINVAL;
goto out_err;
}
nhe->nh_groups = malloc(RTA_PAYLOAD(tb[NHA_GROUP]));
if (!nhe->nh_groups) {
err = -ENOMEM;
goto out_err;
}
nhe->nh_groups_cnt = RTA_PAYLOAD(tb[NHA_GROUP]) /
sizeof(struct nexthop_grp);
memcpy(nhe->nh_groups, RTA_DATA(tb[NHA_GROUP]),
RTA_PAYLOAD(tb[NHA_GROUP]));
}
if (tb[NHA_RES_GROUP]) {
parse_nh_res_group_rta(tb[NHA_RES_GROUP], &nhe->nh_res_grp);
nhe->nh_has_res_grp = true;
}
nhe->nh_blackhole = !!tb[NHA_BLACKHOLE];
nhe->nh_fdb = !!tb[NHA_FDB];
nhe->nh_family = nhm->nh_family;
nhe->nh_protocol = nhm->nh_protocol;
nhe->nh_scope = nhm->nh_scope;
nhe->nh_flags = nhm->nh_flags;
return 0;
out_err:
ipnh_destroy_entry(nhe);
return err;
}
static void __print_nexthop_entry(FILE *fp, const char *jsobj,
struct nh_entry *nhe,
bool deleted)
{
SPRINT_BUF(b1);
open_json_object(jsobj);
if (deleted)
print_bool(PRINT_ANY, "deleted", "Deleted ", true);
print_uint(PRINT_ANY, "id", "id %u ", nhe->nh_id);
if (nhe->nh_groups)
print_nh_group(nhe);
print_nh_group_type(nhe->nh_grp_type);
if (nhe->nh_has_res_grp)
print_nh_res_group(&nhe->nh_res_grp);
if (nhe->nh_encap)
lwt_print_encap(fp, &nhe->nh_encap_type.rta, nhe->nh_encap);
if (nhe->nh_gateway_len)
__print_rta_gateway(fp, nhe->nh_family,
format_host(nhe->nh_family,
nhe->nh_gateway_len,
&nhe->nh_gateway));
if (nhe->nh_oif)
print_rta_ifidx(fp, nhe->nh_oif, "dev");
if (nhe->nh_scope != RT_SCOPE_UNIVERSE || show_details > 0) {
print_string(PRINT_ANY, "scope", "scope %s ",
rtnl_rtscope_n2a(nhe->nh_scope, b1, sizeof(b1)));
}
if (nhe->nh_blackhole)
print_null(PRINT_ANY, "blackhole", "blackhole ", NULL);
if (nhe->nh_protocol != RTPROT_UNSPEC || show_details > 0) {
print_string(PRINT_ANY, "protocol", "proto %s ",
rtnl_rtprot_n2a(nhe->nh_protocol, b1, sizeof(b1)));
}
print_rt_flags(fp, nhe->nh_flags);
if (nhe->nh_fdb)
print_null(PRINT_ANY, "fdb", "fdb", NULL);
close_json_object();
}
static int __ipnh_get_id(struct rtnl_handle *rthp, __u32 nh_id,
struct nlmsghdr **answer)
{
struct {
struct nlmsghdr n;
struct nhmsg nhm;
char buf[1024];
} req = {
.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct nhmsg)),
.n.nlmsg_flags = NLM_F_REQUEST,
.n.nlmsg_type = RTM_GETNEXTHOP,
.nhm.nh_family = preferred_family,
};
addattr32(&req.n, sizeof(req), NHA_ID, nh_id);
return rtnl_talk(rthp, &req.n, answer);
}
static struct hlist_head *ipnh_cache_head(__u32 nh_id)
{
nh_id ^= nh_id >> 20;
nh_id ^= nh_id >> 10;
return &nh_cache[nh_id % NH_CACHE_SIZE];
}
static void ipnh_cache_link_entry(struct nh_entry *nhe)
{
struct hlist_head *head = ipnh_cache_head(nhe->nh_id);
hlist_add_head(&nhe->nh_hash, head);
}
static void ipnh_cache_unlink_entry(struct nh_entry *nhe)
{
hlist_del(&nhe->nh_hash);
}
static struct nh_entry *ipnh_cache_get(__u32 nh_id)
{
struct hlist_head *head = ipnh_cache_head(nh_id);
struct nh_entry *nhe;
struct hlist_node *n;
hlist_for_each(n, head) {
nhe = container_of(n, struct nh_entry, nh_hash);
if (nhe->nh_id == nh_id)
return nhe;
}
return NULL;
}
static int __ipnh_cache_parse_nlmsg(const struct nlmsghdr *n,
struct nh_entry *nhe)
{
int err, len;
len = n->nlmsg_len - NLMSG_SPACE(sizeof(struct nhmsg));
if (len < 0) {
fprintf(stderr, "BUG: wrong nlmsg len %d\n", len);
return -EINVAL;
}
err = ipnh_parse_nhmsg(stderr, NLMSG_DATA(n), len, nhe);
if (err) {
fprintf(stderr, "Error parsing nexthop: %s\n", strerror(-err));
return err;
}
return 0;
}
static struct nh_entry *ipnh_cache_add(__u32 nh_id)
{
struct rtnl_handle cache_rth = { .fd = -1 };
struct nlmsghdr *answer = NULL;
struct nh_entry *nhe = NULL;
if (rtnl_open(&cache_rth, 0) < 0)
goto out;
if (__ipnh_get_id(&cache_rth, nh_id, &answer) < 0)
goto out;
nhe = malloc(sizeof(*nhe));
if (!nhe)
goto out;
if (__ipnh_cache_parse_nlmsg(answer, nhe))
goto out_free_nhe;
ipnh_cache_link_entry(nhe);
out:
if (answer)
free(answer);
rtnl_close(&cache_rth);
return nhe;
out_free_nhe:
free(nhe);
nhe = NULL;
goto out;
}
static void ipnh_cache_del(struct nh_entry *nhe)
{
ipnh_cache_unlink_entry(nhe);
ipnh_destroy_entry(nhe);
free(nhe);
}
/* update, add or delete a nexthop entry based on nlmsghdr */
static int ipnh_cache_process_nlmsg(const struct nlmsghdr *n,
struct nh_entry *new_nhe)
{
struct nh_entry *nhe;
nhe = ipnh_cache_get(new_nhe->nh_id);
switch (n->nlmsg_type) {
case RTM_DELNEXTHOP:
if (nhe)
ipnh_cache_del(nhe);
ipnh_destroy_entry(new_nhe);
break;
case RTM_NEWNEXTHOP:
if (!nhe) {
nhe = malloc(sizeof(*nhe));
if (!nhe) {
ipnh_destroy_entry(new_nhe);
return -1;
}
} else {
/* this allows us to save 1 allocation on updates by
* reusing the old nh entry, but we need to cleanup its
* internal storage
*/
ipnh_cache_unlink_entry(nhe);
ipnh_destroy_entry(nhe);
}
memcpy(nhe, new_nhe, sizeof(*nhe));
ipnh_cache_link_entry(nhe);
break;
}
return 0;
}
void print_cache_nexthop_id(FILE *fp, const char *fp_prefix, const char *jsobj,
__u32 nh_id)
{
struct nh_entry *nhe = ipnh_cache_get(nh_id);
if (!nhe) {
nhe = ipnh_cache_add(nh_id);
if (!nhe)
return;
}
if (fp_prefix)
print_string(PRINT_FP, NULL, "%s", fp_prefix);
__print_nexthop_entry(fp, jsobj, nhe, false);
}
int print_cache_nexthop(struct nlmsghdr *n, void *arg, bool process_cache)
{
struct nhmsg *nhm = NLMSG_DATA(n);
struct rtattr *tb[NHA_MAX+1];
FILE *fp = (FILE *)arg;
int len;
SPRINT_BUF(b1);
struct nh_entry nhe;
int len, err;
if (n->nlmsg_type != RTM_DELNEXTHOP &&
n->nlmsg_type != RTM_NEWNEXTHOP) {
@@ -354,60 +678,29 @@ int print_nexthop(struct nlmsghdr *n, void *arg)
if (filter.proto && filter.proto != nhm->nh_protocol)
return 0;
parse_rtattr_flags(tb, NHA_MAX, RTM_NHA(nhm), len, NLA_F_NESTED);
open_json_object(NULL);
if (n->nlmsg_type == RTM_DELNEXTHOP)
print_bool(PRINT_ANY, "deleted", "Deleted ", true);
if (tb[NHA_ID])
print_uint(PRINT_ANY, "id", "id %u ",
rta_getattr_u32(tb[NHA_ID]));
if (tb[NHA_GROUP])
print_nh_group(fp, tb[NHA_GROUP]);
if (tb[NHA_GROUP_TYPE])
print_nh_group_type(fp, tb[NHA_GROUP_TYPE]);
if (tb[NHA_RES_GROUP])
print_nh_res_group(fp, tb[NHA_RES_GROUP]);
if (tb[NHA_ENCAP])
lwt_print_encap(fp, tb[NHA_ENCAP_TYPE], tb[NHA_ENCAP]);
if (tb[NHA_GATEWAY])
print_rta_gateway(fp, nhm->nh_family, tb[NHA_GATEWAY]);
if (tb[NHA_OIF])
print_rta_if(fp, tb[NHA_OIF], "dev");
if (nhm->nh_scope != RT_SCOPE_UNIVERSE || show_details > 0) {
print_string(PRINT_ANY, "scope", "scope %s ",
rtnl_rtscope_n2a(nhm->nh_scope, b1, sizeof(b1)));
err = ipnh_parse_nhmsg(fp, nhm, len, &nhe);
if (err) {
close_json_object();
fprintf(stderr, "Error parsing nexthop: %s\n", strerror(-err));
return -1;
}
if (tb[NHA_BLACKHOLE])
print_null(PRINT_ANY, "blackhole", "blackhole ", NULL);
if (nhm->nh_protocol != RTPROT_UNSPEC || show_details > 0) {
print_string(PRINT_ANY, "protocol", "proto %s ",
rtnl_rtprot_n2a(nhm->nh_protocol, b1, sizeof(b1)));
}
print_rt_flags(fp, nhm->nh_flags);
if (tb[NHA_FDB])
print_null(PRINT_ANY, "fdb", "fdb", NULL);
__print_nexthop_entry(fp, NULL, &nhe, n->nlmsg_type == RTM_DELNEXTHOP);
print_string(PRINT_FP, NULL, "%s", "\n");
close_json_object();
fflush(fp);
if (process_cache)
ipnh_cache_process_nlmsg(n, &nhe);
else
ipnh_destroy_entry(&nhe);
return 0;
}
static int print_nexthop_nocache(struct nlmsghdr *n, void *arg)
{
return print_cache_nexthop(n, arg, false);
}
int print_nexthop_bucket(struct nlmsghdr *n, void *arg)
{
struct nhmsg *nhm = NLMSG_DATA(n);
@@ -712,26 +1005,14 @@ static int ipnh_modify(int cmd, unsigned int flags, int argc, char **argv)
static int ipnh_get_id(__u32 id)
{
struct {
struct nlmsghdr n;
struct nhmsg nhm;
char buf[1024];
} req = {
.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct nhmsg)),
.n.nlmsg_flags = NLM_F_REQUEST,
.n.nlmsg_type = RTM_GETNEXTHOP,
.nhm.nh_family = preferred_family,
};
struct nlmsghdr *answer;
addattr32(&req.n, sizeof(req), NHA_ID, id);
if (rtnl_talk(&rth, &req.n, &answer) < 0)
if (__ipnh_get_id(&rth, id, &answer) < 0)
return -2;
new_json_obj(json);
if (print_nexthop(answer, (void *)stdout) < 0) {
if (print_nexthop_nocache(answer, (void *)stdout) < 0) {
free(answer);
return -1;
}
@@ -816,7 +1097,7 @@ static int ipnh_list_flush(int argc, char **argv, int action)
new_json_obj(json);
if (rtnl_dump_filter(&rth, print_nexthop, stdout) < 0) {
if (rtnl_dump_filter(&rth, print_nexthop_nocache, stdout) < 0) {
fprintf(stderr, "Dump terminated\n");
return -2;
}


@@ -28,6 +28,7 @@
#include "rt_names.h"
#include "utils.h"
#include "ip_common.h"
#include "nh_common.h"
#ifndef RTAX_RTTVAR
#define RTAX_RTTVAR RTAX_HOPS
@@ -410,13 +411,13 @@ static void print_rt_pref(FILE *fp, unsigned int pref)
}
}
void print_rta_if(FILE *fp, const struct rtattr *rta, const char *prefix)
void print_rta_ifidx(FILE *fp, __u32 ifidx, const char *prefix)
{
const char *ifname = ll_index_to_name(rta_getattr_u32(rta));
const char *ifname = ll_index_to_name(ifidx);
if (is_json_context())
if (is_json_context()) {
print_string(PRINT_JSON, prefix, NULL, ifname);
else {
} else {
fprintf(fp, "%s ", prefix);
color_fprintf(fp, COLOR_IFNAME, "%s ", ifname);
}
@@ -547,13 +548,11 @@ static void print_rta_newdst(FILE *fp, const struct rtmsg *r,
}
}
void print_rta_gateway(FILE *fp, unsigned char family, const struct rtattr *rta)
void __print_rta_gateway(FILE *fp, unsigned char family, const char *gateway)
{
const char *gateway = format_host_rta(family, rta);
if (is_json_context())
if (is_json_context()) {
print_string(PRINT_JSON, "gateway", NULL, gateway);
else {
} else {
fprintf(fp, "via ");
print_color_string(PRINT_FP,
ifa_family_color(family),
@@ -561,6 +560,13 @@ void print_rta_gateway(FILE *fp, unsigned char family, const struct rtattr *rta)
}
}
void print_rta_gateway(FILE *fp, unsigned char family, const struct rtattr *rta)
{
const char *gateway = format_host_rta(family, rta);
__print_rta_gateway(fp, family, gateway);
}
static void print_rta_via(FILE *fp, const struct rtattr *rta)
{
size_t len = RTA_PAYLOAD(rta) - 2;
@@ -862,7 +868,7 @@ int print_route(struct nlmsghdr *n, void *arg)
print_rta_via(fp, tb[RTA_VIA]);
if (tb[RTA_OIF] && filter.oifmask != -1)
print_rta_if(fp, tb[RTA_OIF], "dev");
print_rta_ifidx(fp, rta_getattr_u32(tb[RTA_OIF]), "dev");
if (table && (table != RT_TABLE_MAIN || show_details > 0) && !filter.tb)
print_string(PRINT_ANY,
@@ -946,7 +952,7 @@ int print_route(struct nlmsghdr *n, void *arg)
print_rta_metrics(fp, tb[RTA_METRICS]);
if (tb[RTA_IIF] && filter.iifmask != -1)
print_rta_if(fp, tb[RTA_IIF], "iif");
print_rta_ifidx(fp, rta_getattr_u32(tb[RTA_IIF]), "iif");
if (tb[RTA_PREF])
print_rt_pref(fp, rta_getattr_u8(tb[RTA_PREF]));
@@ -963,6 +969,10 @@ int print_route(struct nlmsghdr *n, void *arg)
propagate ? "enabled" : "disabled");
}
if (tb[RTA_NH_ID] && show_details)
print_cache_nexthop_id(fp, "\n\tnh_info ", "nh_info",
rta_getattr_u32(tb[RTA_NH_ID]));
if (tb[RTA_MULTIPATH])
print_rta_multipath(fp, r, tb[RTA_MULTIPATH]);

ip/nh_common.h (new file, 53 lines)

@@ -0,0 +1,53 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NH_COMMON_H__
#define __NH_COMMON_H__ 1
#include <list.h>
#define NH_CACHE_SIZE 1024
struct nha_res_grp {
__u16 buckets;
__u32 idle_timer;
__u32 unbalanced_timer;
__u64 unbalanced_time;
};
struct nh_entry {
struct hlist_node nh_hash;
__u32 nh_id;
__u32 nh_oif;
__u32 nh_flags;
__u16 nh_grp_type;
__u8 nh_family;
__u8 nh_scope;
__u8 nh_protocol;
bool nh_blackhole;
bool nh_fdb;
int nh_gateway_len;
union {
__be32 ipv4;
struct in6_addr ipv6;
} nh_gateway;
struct rtattr *nh_encap;
union {
struct rtattr rta;
__u8 _buf[RTA_LENGTH(sizeof(__u16))];
} nh_encap_type;
bool nh_has_res_grp;
struct nha_res_grp nh_res_grp;
int nh_groups_cnt;
struct nexthop_grp *nh_groups;
};
void print_cache_nexthop_id(FILE *fp, const char *fp_prefix, const char *jsobj,
__u32 nh_id);
int print_cache_nexthop(struct nlmsghdr *n, void *arg, bool process_cache);
#endif /* __NH_COMMON_H__ */